RokDoc 2026.2 targets throughput where rock physics users spend most of their time. Rock physics work is rarely limited by whether a model can be built; it is limited by how quickly that model can be iterated, validated against wells, and carried through to the volumes and deliverables the wider subsurface team needs. This release reduces compute overhead in workflows that lean heavily on Rock Physics Models (RPMs) and broadens where machine-learning regression can be applied. The net effect is more time spent evaluating assumptions and less time waiting on recalculation.
Many practical rock physics workflows in RokDoc rely on repeated RPM evaluation: testing mineral and fluid scenarios, updating end-member trends, regenerating synthetic responses, and then pushing results into base property models used downstream in 2D, 3D, or 4D sessions. In this release, performance improvements target RPM calculations that use mineral or fluid mixing. The changes apply not only to internal RPMs, but also to “programmer” RPMs created from crossplot trendlines, which are frequently used to capture local empirical behavior while still staying anchored to physics-based constraints.
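To make the computational shape of "mineral or fluid mixing" concrete, here is a minimal, standalone sketch of the standard Voigt-Reuss-Hill and Wood's (Reuss) averages that this class of RPM evaluates repeatedly. It is generic rock physics written with NumPy, not RokDoc's internal code, and the mineral and fluid moduli are illustrative textbook-style values.

```python
import numpy as np

def voigt_reuss_hill(fractions, moduli):
    """Voigt-Reuss-Hill average of mineral moduli (GPa) for given volume fractions."""
    f = np.asarray(fractions, dtype=float)
    m = np.asarray(moduli, dtype=float)
    f = f / f.sum()                      # normalise fractions to sum to 1
    voigt = np.sum(f * m)                # upper bound: arithmetic mean
    reuss = 1.0 / np.sum(f / m)          # lower bound: harmonic mean
    return 0.5 * (voigt + reuss)

def wood_fluid_modulus(saturations, fluid_moduli):
    """Reuss (Wood's) average for a fluid mix, e.g. brine/gas bulk moduli (GPa)."""
    s = np.asarray(saturations, dtype=float)
    k = np.asarray(fluid_moduli, dtype=float)
    return 1.0 / np.sum(s / k)

# Illustrative case: 70% quartz / 30% clay matrix, 80% brine / 20% gas pore fluid
k_mineral = voigt_reuss_hill([0.7, 0.3], [37.0, 21.0])   # bulk moduli in GPa
k_fluid = wood_fluid_modulus([0.8, 0.2], [2.8, 0.04])
print(f"Matrix bulk modulus ~{k_mineral:.1f} GPa, fluid bulk modulus ~{k_fluid:.2f} GPa")
```

When this kind of averaging sits inside a forward model that is re-evaluated for every sample in a section or volume, small per-call savings compound quickly, which is where the release's RPM speed-ups are felt.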
In day-to-day terms, the improvement shows up in two places. First, forward modelling becomes more responsive when the input assumptions are adjusted and outputs need to be recomputed across sections or volumes. Second, base property model workflows in 2D, 3D, and 4D sessions benefit from faster RPM evaluation. That matters when the rock physics model is not an end in itself but a parameterization that needs to propagate into subsequent interpretation, inversion conditioning, or export products. For large 3D exports, performance gains can be substantial and, depending on survey size, can approach an order-of-magnitude reduction in runtime. This type of change tends to alter working practice: it becomes realistic to run more sensitivity cases and to keep models closer to the latest well and petrophysical updates instead of batching changes for “overnight” recompute.
If your organization treats rock physics rigor as a governance requirement, the performance uplift has a scientific dimension as well. Faster iteration makes it easier to test alternative mixing laws or uncertainty bounds explicitly, rather than relying on a single “best guess” parameter set because compute cost discourages exploration.
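As a concrete picture of what testing alternative fluid scenarios explicitly can look like, the sketch below sweeps the standard Gassmann equation over three hypothetical pore fluids. Again, this is a generic illustration rather than RokDoc's RPM machinery, and the dry-rock, matrix, and fluid parameters are placeholders.

```python
import numpy as np

def gassmann_ksat(k_dry, k_min, k_fluid, phi):
    """Saturated-rock bulk modulus from the standard Gassmann equation (moduli in GPa)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative dry-rock and matrix parameters (hypothetical sandstone)
k_dry, mu, k_min, phi = 12.0, 10.0, 37.0, 0.25   # GPa, GPa, GPa, fraction
rho_matrix = 2.65                                 # g/cc grain density

# Fluid scenarios to test: (name, bulk modulus in GPa, density in g/cc)
scenarios = [("brine", 2.8, 1.05), ("oil", 1.0, 0.80), ("gas", 0.04, 0.20)]

for name, k_fl, rho_fl in scenarios:
    k_sat = gassmann_ksat(k_dry, k_min, k_fl, phi)
    rho = (1.0 - phi) * rho_matrix + phi * rho_fl      # bulk density
    vp = np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)       # GPa / (g/cc) gives (km/s)^2
    print(f"{name:>5}: Ksat = {k_sat:5.2f} GPa, Vp ~ {vp:.2f} km/s")
```

The point is not the specific numbers but the habit: when each scenario recomputes quickly, running the full set becomes routine rather than an occasional exercise.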
The chain from petrophysics to elastic properties to synthetic seismic modelling is faster than ever.
Machine-learning regression is often introduced on 3D projects, yet many high-consequence choices occur earlier, when interpretation relies on sparse 2D coverage, regional reconnaissance, or a small number of wells. Deep QI XGBoost regression now runs in 2D sessions, which makes it practical to apply the same trained relationships on 2D models and along regional seismic lines. This supports workflows such as screening alternative play concepts across a basin transect, extending well-calibrated relationships along a strike line, or generating property estimates in areas where 3D coverage is not available but decisions on maturation, well placement candidates, or data acquisition still need quantitative support. (Deep QI requires the appropriate license.)
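Deep QI's XGBoost regression is built into RokDoc, so the following is only an outline of the general technique using the open-source xgboost and scikit-learn packages: train a gradient-boosted regression on well-based attribute/target pairs, validate on held-out samples, then apply it to attribute samples extracted along a 2D line. The feature names, synthetic data, and hyperparameters are all hypothetical.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical training table assembled at the wells: features might be inversion
# outputs sampled along the borehole (e.g. AI, Vp/Vs); the target a property such as porosity.
rng = np.random.default_rng(0)
ai = rng.uniform(5000, 9000, 500)             # acoustic impedance (m/s * g/cc)
vpvs = rng.uniform(1.5, 2.2, 500)             # Vp/Vs ratio
phi = 0.45 - 3.5e-5 * ai + 0.02 * rng.normal(size=500)   # synthetic porosity target

X = np.column_stack([ai, vpvs])
X_train, X_test, y_train, y_test = train_test_split(X, phi, test_size=0.25, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# Hold-out check against well data before trusting the model along a 2D line
print("hold-out R^2:", r2_score(y_test, model.predict(X_test)))

# Apply the trained model to attribute samples extracted along a 2D section
line_samples = np.column_stack([rng.uniform(5000, 9000, 10_000),
                                rng.uniform(1.5, 2.2, 10_000)])
porosity_estimate = model.predict(line_samples)
```

The well-based hold-out step matters more, not less, in 2D settings, because sparse coverage gives fewer independent checks on how far a trained relationship can be extrapolated.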
Some environments standardize log types beyond the defaults, either because of corporate naming conventions or integration with enterprise data platforms. A fix in this release ensures that External Interface modules recognize custom log types defined in Global Settings. In practical terms, this reduces manual remapping and the scope for setup mistakes when moving between projects, especially where multiple teams contribute wells and derived curves. The outcome is a more repeatable workflow.
While not specific to the Rock Physics module, two platform changes are worth noting because they affect typical rock physics working conditions. First, RokDoc now caps memory usage at 60 percent of system RAM by default, with configuration available through an environment variable. On machines shared with other applications, that tends to reduce contention and improve predictability during compute-heavy sessions. Second, SEG-Y import now detects when the wrong loader is selected and guides the workflow toward the appropriate reader. That small guardrail helps earlier in the chain when seismic context is brought in to frame rock physics QC or to prepare integrated interpretation.
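The SEG-Y guardrail itself is internal to the importer, but the kind of check involved can be illustrated with a short standard-library sketch that reads the data sample format code declared in the binary file header. The path is a placeholder and the mapping covers only the common SEG-Y rev 1 format codes; it is not how RokDoc's loader detection is implemented.

```python
import struct

# Data sample format codes defined by the SEG-Y rev 1 standard
FORMAT_CODES = {1: "4-byte IBM float", 2: "4-byte integer", 3: "2-byte integer",
                5: "4-byte IEEE float", 8: "1-byte integer"}

def check_segy_format(path):
    """Read the binary file header and report the declared sample format."""
    with open(path, "rb") as f:
        f.seek(3200)                   # skip the 3200-byte textual header
        binary_header = f.read(400)
    # File bytes 3225-3226 (offsets 24-25 within the binary header), big-endian
    # unsigned short, hold the data sample format code.
    code = struct.unpack(">H", binary_header[24:26])[0]
    label = FORMAT_CODES.get(code, f"unknown code {code}")
    print(f"{path}: declared sample format = {label}")
    if code not in FORMAT_CODES:
        print("  -> format code not recognised; the file may need a different loader")
    return code

check_segy_format("example_line.sgy")  # placeholder path
```

Catching a mismatch like this at import time is cheap; discovering it later, after amplitudes have fed into QC crossplots or calibration, is not.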
If a deeper technical “how-to” would be useful, please get in touch! Possible follow-on topics include a step-by-step workflow for training and applying XGBoost regression in 2D with well-based validation, a practical guide to building trendline-derived programmer RPMs that remain physically interpretable, or a performance-focused note on scaling RPM forward modelling for large 3D exports.
To help prioritize those, it would be helpful to hear which stage is most time-sensitive in your current workflows: model calibration at the well, scenario testing, or pushing results into downstream sessions and exports. Feedback on where the performance changes show up most strongly in real projects is also valuable to us, since survey characteristics can influence the magnitude of the improvement.