Software & Services Qualify and Quantify Your Reservoir


Hampson-Russell Software for Every Dimension

Table of Contents:

CE9 Development Plan
eLOG
AVO Modeling
AVO Reconnaissance
AFI
STRATA
EMERGE
ISMAP
PRO4D
PROMC
View3D
Training & Reservoir Services

Who We Are

Our words are as valid today, when Hampson-Russell has close to 50 employees located in offices around the world, as they were in 1987 when we founded our start-up company in Calgary with four employees. We built our reputation by partnering with our clients, getting to understand their challenges and goals, and delivering effective, customized solutions.

Today Hampson-Russell helps petroleum and service companies across North and South America, Europe, Asia, the Middle East and Australia, specializing in AVO Analysis, Seismic Inversion, Reservoir Characterization and Near-Surface Refraction Analysis. With input from our many customers, we feel confident that we can meet the challenges of the future.

CE9 Development Plan

Our objective for software updates and new releases is to concentrate on making your life as a Hampson-Russell software user much simpler and more productive.

Our focus is to significantly improve our software in the upcoming CE9 release. With CE9, you will be able to do more evaluation with more data and better results, but with less effort, fewer steps and a far lower learning curve. To achieve that, we will use the latest proven interface designs and a more powerful software development tool.

The Key Points

Integration
• Integrate all of our modules and programs, including SeisLoader and View3D, into one package.
• Centralize functionality common to all programs into a single module, thereby eliminating duplication of effort.
• Improve the links with GeoFrame, OpenWorks and OpenSpirit.
• Improve the data management and the visual display of that data for projects.

Simplification
• Create an easier interface by supporting drag-and-drop and context-sensitive right-clicks on most display windows.
• Add more advanced, but easy-to-use, plotting features to maps, seismic cross sections, log tracks and OpenGL 3D displays.
• Simplify the entry of parameters for all log and seismic processes.

Processing Capability
• Speed up the seismic processing and reduce the overall analysis time by providing a chained option for typical processes.
• Allow the use of multiple cores and CPUs.
• Compile the programs in a true 64-bit mode to handle larger data sets.
• Add more seismic processes for multiple 2D lines in addition to 3D.
• Work in the depth domain in addition to time.

Dan Hampson Brian Russell

“Hampson-Russell has been providing innovative geophysical software and services since 1987. Our goal has been to research and deliver these tools in a form which is accessible to the working geophysicist. While the techniques may be sophisticated, we always strive to determine the key process flow which makes these methods work reliably on your data.”

Workflow Approach
• Integrate a workflow orientation to guide both novice and expert users in most functions.
• Select a detailed expert approach, a faster everyday workflow, or define a specialized workflow with parameters set for particular areas.

The expected release of CE9/R1 is the first quarter of 2010.


eLOG Well Log Conditioning and Modeling

eLOG is a comprehensive well log editing and modeling tool designed to prepare or create log data for use within the Hampson-Russell suite of applications. eLOG is launched from each of the Hampson-Russell programs, or can be used as a standalone application.

Log Editing

• eLOG provides graphical log editing functions to fix or prepare logs for modeling.

• A library of log transforms is supplied to synthesize missing logs required for modeling. In many cases, these transforms are designed for the “wet” case and must be processed through fluid substitution to properly represent the in-situ hydrocarbon case.

• A log math tool kit is provided to create customized log transforms.

Log Cross Plots

Well log cross plotting is a powerful tool for identifying and classifying anomalies.

• Modeled or in-situ logs can be cross plotted and overlaid on interpretive petrophysical templates.

• Zones highlighting clusters and anomalies, such as wet trend sands or target zones, are graphically drawn on the cross plots and projected back on the curve display.

Wavelet Extraction

A key element in modeling and well-to-seismic correlation is the wavelet used to generate the synthetics. While a generic wavelet will model a seismic response, an accurate comparison to existing seismic data requires a wavelet matched to or extracted from the seismic.

• There are two methods of wavelet extraction in the eLOG program. One method compares the well log reflectivity with the seismic data and calculates an operator, which shapes one into the other. This is the preferred method. However, it requires a good well-to-seismic tie in time.

• The second method uses the seismic data alone to calculate a constant phase wavelet whose amplitude spectrum matches that of the seismic. An automated phase cross-correlation procedure helps match the phase. This is generally the wavelet used during the initial well-to-seismic correlation process. Once the well is properly positioned in time, the full phase extraction method listed above is performed.
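
The statistical, constant-phase extraction described above can be illustrated with a short, self-contained sketch. This is not the Hampson-Russell implementation; the function name, the simple averaged-spectrum approach and the single phase value applied to all frequencies are illustrative assumptions.

```python
import numpy as np

def statistical_wavelet(traces, dt, wavelet_len=0.200, phase_deg=0.0):
    """Hypothetical sketch of a statistical, constant-phase wavelet estimate:
    the amplitude spectrum is the average spectrum of the input traces and a
    single phase value is imposed on all frequencies.
    traces: (n_traces, n_samples) array; dt: sample interval in seconds."""
    n_traces, n_samp = traces.shape
    # average amplitude spectrum of the data (non-negative frequencies only)
    amp = np.mean(np.abs(np.fft.rfft(traces, axis=1)), axis=0)
    # impose a constant phase (kept zero at DC to avoid a spurious offset)
    phase = np.full(amp.shape, np.deg2rad(phase_deg))
    phase[0] = 0.0
    wav = np.fft.irfft(amp * np.exp(1j * phase), n=n_samp)
    wav = np.fft.fftshift(wav)                      # centre the peak in the window
    half = int(round(wavelet_len / (2.0 * dt)))     # keep the requested length
    mid = n_samp // 2
    wav = wav[mid - half: mid + half + 1]
    return wav / np.max(np.abs(wav))                # normalize the peak amplitude
```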

Fluid Replacement Modeling

Fluid replacement modeling (FRM) using the Biot-Gassmann approach allows a seismic response in the reservoir to be analyzed with varying fluid types and saturations.

• Customary petrophysical parameters to be analyzed are the density and moduli of each of the constituent components: matrix, hydrocarbon and brine.

• The default parameters are “book” values. However, the fluid and matrix properties calculators allow for calculations from more fundamental measurements.

• The Batzle-Wang fluid properties take into account parameters such as pressure, temperature, fluid gravities, salinity and GOR.

• Matrix properties can be modeled from precise specification of the mineralogy, and matrix averaging techniques are designed for different types of reservoirs.

• It is important to allow for exclusion zones when modeling sands with thin streaks of shale or other non-porous events. This is accomplished by setting rock property constraints in the model based on boundaries in log values.
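
For orientation, a minimal Biot-Gassmann substitution can be written in a few lines. The sketch below is not the eLOG code; it assumes SI units, a single mineral modulus per sample and a Wood (Reuss) fluid mix, and it omits the exclusion-zone logic mentioned above.

```python
import numpy as np

def wood_fluid_modulus(sw, k_brine, k_hc):
    """Reuss (Wood) average of brine and hydrocarbon bulk moduli at water saturation Sw."""
    return 1.0 / (sw / k_brine + (1.0 - sw) / k_hc)

def gassmann_substitute(vp, vs, rho, phi, k_min, k_fl_old, k_fl_new,
                        rho_fl_old, rho_fl_new):
    """Replace the pore fluid in log samples using the Biot-Gassmann equation.
    vp, vs [m/s], rho [kg/m3], phi [fraction], moduli [Pa]; all may be arrays."""
    k_sat = rho * (vp**2 - 4.0 / 3.0 * vs**2)   # in-situ saturated bulk modulus
    mu = rho * vs**2                            # shear modulus (unchanged by fluid)
    # invert Gassmann for the dry-rock bulk modulus
    k_dry = (k_sat * (phi * k_min / k_fl_old + 1.0 - phi) - k_min) / \
            (phi * k_min / k_fl_old + k_sat / k_min - 1.0 - phi)
    # forward Gassmann with the new pore-fluid modulus
    k_sat_new = k_dry + (1.0 - k_dry / k_min)**2 / (
        phi / k_fl_new + (1.0 - phi) / k_min - k_dry / k_min**2)
    # update bulk density and recompute velocities
    rho_new = rho + phi * (rho_fl_new - rho_fl_old)
    vp_new = np.sqrt((k_sat_new + 4.0 / 3.0 * mu) / rho_new)
    vs_new = np.sqrt(mu / rho_new)
    return vp_new, vs_new, rho_new
```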

eLOG provides well log based functionality which is required prior to utilizing the well data in the advanced technologies found in Hampson-Russell programs. These same requirements exist in most integrated well and seismic technologies. eLOG is therefore provided as a standalone program as well as being embedded in the Hampson-Russell suite of programs.

Well-to-Seismic Correlation

It is critical in all of the Hampson-Russell programs, as well as most others, that the wells used to integrate measured rock property data with seismic data correlate correctly in time.

• The check shot utility in eLOG includes options on how the check shot measurements are honored and applied. Graphical displays showing drift curve and required velocity deviations are used to QC the data.

• After check shot correction, manual corrections are made to optimize the correlation in time. Each well is individually “tied” to the seismic via a comparison of its synthetic seismogram to a composite seismic trace extracted following the well bore. Manual matching of the synthetic to the seismic is accomplished with mouse clicks on each, initiating a stretch-and-squeeze operation whose parameters and effect on the time-depth relationship are user defined.


AVO Modeling

Practical work in AVO methodology generally falls into two separate, yet linked, workflows: AVO modeling for in-situ and “what if” scenarios, and AVO reconnaissance of the prestack seismic data. The Hampson-Russell AVO program provides both, and intermingles the two workflows into a seamless and thorough AVO analysis.

In an AVO study, modeling is usually performed first, in order to determine what type of AVO anomaly may be anticipated.

Wavelets

A key element in modeling is the wavelet used to generate the synthetic seismogram. While a generic wavelet will model an AVO effect, an accurate comparison to existing seismic data requires a wavelet matched to the seismic.

• There are two methods of wavelet extraction in the AVO program. One method compares the well log reflectivity with the seismic data and calculates an operator, which shapes one into the other. This is the preferred method. However, it requires a good well-to-seismic tie in time.

• The second method uses the seismic data alone to calculate a constant phase wavelet whose amplitude spectrum matches that of the seismic. An automated phase cross-correlation procedure helps match the phase. This is generally the wavelet used during the initial well-to-seismic correlation process. Once the well is properly positioned in time, the full phase extraction method listed above is performed.

• A library of wavelets is generated based on the wells in the target area and the extraction methods employed. In some instances it is appropriate to create multi-well wavelets based on wavelet averaging.

Synthetic Calculation

Modeling utilities in the AVO program are designed to create 1D, 2D and 3D synthetic offset or angle gathers from well log data. An analysis of the real seismic gathers is used to parameterize the models for direct comparison.

• The AVO modeling operation will create an offset/angle-dependent synthetic using ray-tracing to calculate the incidence angles and the Zoeppritz, Aki-Richards or Full Wave equations to calculate the amplitudes.

• For modeling seismic geometries with sufficient angle range, the “third” term of Aki-Richards equation, curvature, is employed.

• Although Zoeppritz and Aki-Richards equations are the most commonly used options, multilayer AVO modeling requires computation of the full elastic wave solution (with optional anelastic effects) which includes primaries, converted waves, and multiples. Elastic wave modeling is also used to model critical and post-critical events.

• In the cases where you suspect that transverse anisotropy is affecting the AVO response, Thomsen’s parameters are added to include anisotropy in the synthetic seismogram.

• Changes in reservoir thickness can be modeled as 2D wedge models using the same synthetic calculation parameters. Additional information is required regarding thickness change, impact on the area outside the zone, and changes in the depth-time relationship. Modeled well logs, in addition to the synthetic gathers, can be captured at any position in the wedge. A tuning analysis chart is generated which graphically shows amplitude variations along one of the wedge interfaces. This highlights the point of maximum amplitude which corresponds to the tuning thickness.

• Oftentimes an AVO anomaly cannot be adequately modeled by changing one reservoir parameter at a time. AVO provides a 3D modeling technique which systematically modifies two reservoir parameters simultaneously; one in the inline direction and the other in the cross-line direction. Typical parameters used for modeling are porosity, water saturation, density or thickness, among others. The resultant modeled cube has synthetic gathers at each grid cell or a stacked response.

Model Analysis

An AVO model is the link between the modeled logs resulting from fluid substitution or other changes and the real data. The AVO program provides several qualitative and quantitative approaches to comparison that can help distinguish subtle yet important differences, or add confidence that the “what if” scenarios modeled are valid.

• Graphical comparisons between real and synthetic gathers can be made by plotting the corresponding reflection and transmission coefficients for one or two interfaces.

• A further comparison can be made by simultaneously plotting the amplitudes from any number of events that have been picked from real or synthetic data. This graphical display provides a more detailed analysis between modeled and real data than visual inspection or difference plots.

• Interactive and automatic prestack event picking is available, and can utilize post-stack picks as seed picks in the prestack domain.

• Modeled logs can be cross plotted and overlaid on interpretive petrophysical templates. An understanding of rock physics is crucial for the interpretation of AVO anomalies.

• Synthetic models can be displayed on intercept-gradient cross plots for Rutherford Classification analysis and similarly compared to real data.

The AVO program analyzes the fluid content of reservoir rocks through a combination of visual, analytical and modeling processes which utilize both prestack seismic and well log data. Advanced tools and practical workflows are provided which allow this complex technology to be a cornerstone of the exploration effort.

Fluid Replacement Modeling

Fluid replacement modeling (FRM) using the Biot-Gassmann approach allows an AVO response in the reservoir to be analyzed with varying fluid types and saturations.

• A library of log transforms is supplied to synthesize missing logs required for AVO modeling. In many cases, these transforms are designed for the “wet” case and must be processed through fluid substitution to properly represent the in-situ hydrocarbon case.

• Customary petrophysical parameters to be analyzed are the density and moduli of each of the constituent components: matrix, hydrocarbon and brine.

• The default parameters are “book” values. However, the fluid and matrix properties calculators allow for calculations from more fundamental measurements.

• The Batzle-Wang fluid properties take into account parameters such as pressure, temperature, fluid gravities, salinity and GOR.

• Matrix properties can be modeled from precise specification of the mineralogy, and matrix averaging techniques are designed for different types of reservoirs.

• It is important to allow for exclusion zones when modeling sands with thin streaks of shale or other non-porous events. This is accomplished by setting rock property constraints in the model based on boundaries in log values.


AVO Reconnaissance

Practical work in AVO methodology generally falls into two separate, yet linked, workflows: AVO modeling for in-situ and “what if” scenarios, and AVO reconnaissance of the prestack seismic data. The Hampson-Russell AVO program provides both, and intermingles the two workflows into a seamless and thorough AVO analysis.

AVO reconnaissance is a study of the prestack seismic response to identify and evaluate amplitude-versus-offset anomalies in order to determine the fluid contents of the rocks.

Data Conditioning

A basic assumption in AVO reconnaissance is that the seismic gather data has been processed to be noise-free, while preserving the amplitude response. As AVO-compliant data is often not the norm, the AVO program includes a complete set of processing tools for optimally conditioning the data for analysis.

• The angle gather process transforms gathers from the offset domain into the incident angle domain. The AVO program will automatically convert offset-to-incident angle for all calculations, but this function lets you see the offset-to-incident angle process in detail, before you start performing the AVO analyses.

• Bandpass and Inverse Q filters are used to reduce noise and compensate for attenuation.

• The INVEST Radon filter is used to perform multiple elimination and random noise suppression.

• The NMO process is used to apply a Normal Moveout correction to a range of CDP gathers, using one or more velocity function(s); a minimal sketch of this hyperbolic correction follows this list.

• Residual NMO is applied to correct errors in a previously applied NMO function.

• The Mute feature applies an offset-dependent mute (or “ramp”) to a range of gathers, with options for inner, outer and surgical mutes.

• The super gather process analyzes the gathers and calculates a number of “super-gathers”, in which each trace represents a range of offsets. CDPs are averaged to enhance the signal-to-noise ratio and decrease the size of the data set, while maintaining amplitude-versus-offset information.

• The trim statics process attempts to determine an optimal shift to apply to each trace in a gather to fix migration moveout problems.

• AVO Offset Scaling is designed to correct systematic offset-dependent amplitude distortion.
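
As a concrete illustration of the moveout correction referenced in the list above, the sketch below applies the textbook hyperbolic NMO equation t(x) = sqrt(t0^2 + x^2/v^2) with simple linear interpolation. The function and its arguments are assumptions, not the AVO program's processing code, and stretch muting is omitted.

```python
import numpy as np

def nmo_correct(gather, offsets, dt, vnmo):
    """Sketch of a hyperbolic normal-moveout correction.
    gather: (n_samples, n_offsets) array; offsets [m]; dt [s];
    vnmo: NMO velocity per zero-offset time sample [m/s], length n_samples."""
    n_samp, n_off = gather.shape
    t0 = np.arange(n_samp) * dt                       # zero-offset two-way times
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # travel time on this offset for every output (zero-offset) sample
        tx = np.sqrt(t0**2 + (x / vnmo)**2)
        # linear interpolation of the input trace at the moveout times
        out[:, j] = np.interp(tx, t0, gather[:, j], left=0.0, right=0.0)
    return out
```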

AVO Attributes

The AVO Attribute process analyzes seismic gathers to determine either the zero-offset or gradient components of the amplitude (A,B,C), or the zero-offset P-wave and S-wave reflection coefficients (Rp,Rs) at each time sample.
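
The fit behind the A and B attributes can be sketched as a per-sample least-squares solution of the two-term Aki-Richards/Shuey approximation R(theta) ~ A + B*sin^2(theta). This is an illustrative stand-alone version, not the program's attribute engine.

```python
import numpy as np

def intercept_gradient(angle_gather, angles_deg):
    """Least-squares fit of R(theta) ~ A + B*sin^2(theta) at every time sample.
    angle_gather: (n_samples, n_angles) amplitudes; returns the A and B arrays."""
    theta = np.deg2rad(np.asarray(angles_deg))
    design = np.column_stack([np.ones_like(theta), np.sin(theta)**2])  # (n_angles, 2)
    # solve the same two-parameter system for every time sample at once
    coeffs, *_ = np.linalg.lstsq(design, angle_gather.T, rcond=None)   # (2, n_samples)
    intercept, gradient = coeffs
    return intercept, gradient

# Example derived attribute from the list below: the AVO Product A*B
# product = intercept * gradient
```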

• The raw attribute volumes are rarely used in this form. Instead, other AVO attributes are calculated from them. A few of the most popular are:
  - AVO Product (A*B)
  - Scaled Poisson’s Ratio Change (aA+bB)
  - Shear Reflectivity (aA-bB)
  - Fluid Factor (Rp - 1.16[Vs/Vp]Rs)
  - Vp/Vs

• AVO attribute maps are generated either as horizon extractions from the AVO attribute volumes, or from intercept/gradient calculations made at events on each gather identified by prestack horizon picks. Interactive and automated prestack event picking is utilized in the AVO program.

Post-stack horizon picks, either generated in the program or imported from ASCII files, can be used as seed picks for prestack interpretation.

• A gradient analysis plot is used to test AVO attribute parameters prior to generating AVO volumes or maps. It also helps in determining the classification of the AVO anomaly.

• AVO attribute cross plotting is a powerful tool for identifying and classifying AVO anomalies. Zones highlighting clusters and anomalies, such as wet trend sands or target zones, are graphically drawn on the cross plots and projected back on the seismic data.

• Polarization analysis (Hodogram) is used to reduce the effect of wavelet interference in the Intercept/Gradient cross plots. The calculated polarization angles are plotted trace by trace as a function of time. Also generated is the Polarization Product attribute, which is Polarization Angle multiplied by the length of the Polarization Vector.

• The AVO program generates range-limited stacks averaged over specified offset or angle ranges. In addition to highlighting AVO bright spots, they are a required component in elastic impedance inversion.

Graphical Display

Much of AVO reconnaissance involves a visual inspection of the seismic gathers, stacks, and attributes. The AVO program has advanced display options designed to facilitate the simultaneous interpretation of multiple volumes and maps.

• Seismic sections (gathers, stacks, attributes) and map views can be linked together for simultaneous navigation.

• Arbitrary lines can be displayed or extracted by drawing on base maps or horizon maps.

• Individually selected gathers can be displayed or extracted by mouse clicking on maps.

• Multiple seismic volumes can be overlaid and represented by colors or traces.

• A variable-shaped “panner” window including one data volume can be positioned on top of another data volume and moved with the mouse.

• Seismic attributes based on multiple volumes in one window are calculated on the fly and displayed as color or wiggle.

• Cross plot zones can be displayed on any of the seismic sections or maps.
• Well logs or synthetics are plotted with deviation paths.
• View3D is a companion program supplied with the AVO program for 3D viewing of seismic, horizon and well data. Among the numerous display options are:
  - Oblique slices, fence diagrams, probes, contour maps, well bores with colored amplitudes, and opacity tools to enhance objects and cross plot zones.

AVO reconnaissance requires a combination of visualization, processing and analytical tools to identify and evaluate Amplitude-Versus-Offset anomalies from seismic gathers. The Hampson-Russell AVO program is the premier tool available to the geophysicist endeavoring to identify ever more subtle hydrocarbon reservoirs.


Calibration of the Real Data

The ultimate goal is to make comparisons between the real data and the modeled data to determine probabilities of encountering different fluid types. Scalers are required to plot the real and simulated data simultaneously.

• While the simulated data accurately reflect the range of intercept and gradient for reflection coefficients, the real data values typically have an arbitrary scaling. There may also be errors in the gradient due to processing limitations.

• AFI automatically derives scalers which can be applied to the real data points to convert them to the same amplitude range as the model data.

• AVO attribute maps are generated from seismic gathers or angle range stacks along specified horizons. AFI includes prestack gather event picking for this purpose. Zones identified as “wet” or hydrocarbon-bearing are defined graphically on these maps. The data from these zones is then overlaid on the modeled intercept-gradient cross plots for comparison and validation of the scalers.

• Bayes’ Theorem is then used to predict the probability that AVO attribute values derived from real seismic volumes can be associated with each of the modeled fluid types. AFI produces maps of gas, oil or hydrocarbon probabilities, and fluid indicator maps set with a minimum acceptable probability.
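
A compact sketch of this Bayesian step is shown below, assuming Monte Carlo intercept-gradient samples for each fluid (described under Modeling and Monte Carlo Simulation later in this section) and already-scaled real data points. Summarizing each simulated cluster by a single 2D Gaussian is a simplifying assumption of the sketch, not necessarily how AFI parameterizes the distributions.

```python
import numpy as np

def fluid_probabilities(ig_real, ig_sims, priors=None):
    """Posterior probability of each fluid at each real intercept-gradient point.
    ig_real: (m, 2) scaled real data; ig_sims: dict fluid_name -> (n, 2) Monte
    Carlo samples; priors: optional dict of prior probabilities per fluid."""
    names = list(ig_sims)
    if priors is None:
        priors = {k: 1.0 / len(names) for k in names}
    like = np.zeros((len(ig_real), len(names)))
    for j, name in enumerate(names):
        sims = ig_sims[name]
        mu = sims.mean(axis=0)
        cov = np.cov(sims, rowvar=False)
        inv, det = np.linalg.inv(cov), np.linalg.det(cov)
        d = ig_real - mu
        maha = np.einsum('ij,jk,ik->i', d, inv, d)        # Mahalanobis distances
        like[:, j] = priors[name] * np.exp(-0.5 * maha) / (2 * np.pi * np.sqrt(det))
    post = like / like.sum(axis=1, keepdims=True)         # Bayes' theorem
    return dict(zip(names, post.T))
```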

Predictions from AVO, as with all geophysical predictions, are really probability statements. The level of uncertainty varies greatly. This depends not only on the seismic data quality, but on the “non-uniqueness” in the relationship between the target lithology and the seismic measurement. AFI measures that uncertainty, allowing the interpreter to better understand and quantify AVO decisions.

AFI AVO Uncertainty Analysis

AFI (AVO Fluid Inversion) estimates uncertainty in fluid predictions from AVO analysis. Designed as a seamless companion to the AVO program, AFI uses the techniques of Biot-Gassmann fluid substitution, Monte Carlo simulation, and Bayesian estimation to build fluid probability maps. These are then used to make a quantitative analysis of the probability of exploration success.

Modeling

In order to assess the probability of fluid types from the actual seismic data, AFI must first model the range of seismic AVO responses associated with expected rock property variations.

• AFI starts with the assumption that the target reservoir can be represented by a three-layer model, with a sand layer enclosed by shale layers.

• The shales are assumed to be wet, while the sand is modeled with brine, oil and gas alternately.

• The rock physics parameters for each layer can be defined as a probability distribution, which determines the relative spread of values expected for that parameter.

• Many of the required parameters can come from a trend analysis of well logs in the area. In particular, p-wave sonic, density and porosity logs are used for this analysis. Gamma Ray or similar lithology logs are used to differentiate sands from shales.

• The rock physics parameters in the sand layer are more complex than those in the shale layers:
  - Brine, Gas, Oil and Matrix Modulus
  - Brine, Gas, Oil and Matrix Density
  - Porosity
  - Shale Volume
  - Water Saturation
  - Thickness

• These rock physics parameters are used in the Biot-Gassmann substitution to calculate the effects of changing fluids within the sand layer.

• The resultant stochastic models vary with depth, as the rock properties vary due to compaction trends.

• Wavelets are extracted from the seismic data and used to determine the influence of layer thickness and event tuning on the model.

• By representing lithologic parameters as probability distributions, we can calculate the range of expected AVO responses. This allows us to investigate the uncertainty in AVO predictions.

Monte Carlo Simulation

Monte Carlo simulation is used to determine the probable AVO response for brine, oil and gas. The responses are displayed as Intercept and Gradient cross plots.

• The Monte Carlo simulations are performed at a series of depth levels, to model the varying trends in the area.

• Although only one interface of the model is used to calculate the intercept and gradient values for the cross plot (generally the sand top), the sand thickness is still modeled because of wavelet interference from the second interface.

• Starting from the Brine Sand case, the corresponding Oil and Gas Sand models are generated using Biot-Gassmann substitution. This creates three points on the I-G cross plot. By repeating this process many times, we get a probability distribution for each of the three fluid types. Because the trends are depth-dependent, so are the predicted distributions.


STRATA Seismic Inversion

Inversion is the process of extracting, from seismic data, the underlying geology which gave rise to that seismic. Inversion results are a fundamental component of reservoir characterization, and a flagship technology within Hampson-Russell Software. The STRATA program provides different methodologies appropriate for varying geologic scenarios and available data.

Inversion Types

STRATA provides a number of different inversion methods. The traditional approach is to use post-stack seismic data to invert for acoustic impedance. A more recent advance utilizes prestack seismic data with the aim of extracting both acoustic and shear impedance, along with density.

• Post-stack: Model Based, Sparse Spike, Colored, Recursive and Neural Network

• Prestack: Simultaneous, Elastic, Independent and Lambda-mu-rho

Initial Model

The geologically consistent initial impedance model serves as both input and constraints for the inversion.

• An important component of the initial model is the set of seismic horizons used as guides for the interpolation of well data. The horizons represent both structural and stratigraphic constraints to the model. Inter-horizon layering for interpolation can represent Top Lap, Base Lap or Conformable Scenarios. Horizons can be interpreted within the STRATA program or imported from other programs.

• Following interpolation, a time domain filter is applied to the model when only the low-frequency component is desired for the inversion.

• It is critical that all the wells used in the initial model correlate correctly with the seismic data. After check shot correction, manual corrections are made to optimize the correlation in time. Each well is individually “tied” to the seismic via a comparison of its synthetic seismogram to a composite seismic trace extracted following the well bore. Manual matching of the synthetic to the seismic is accomplished with mouse clicks on each, initiating a stretch-and-squeeze operation whose parameters and effect on the time-depth relationship are user defined.

• The wavelet used for the synthetic trace is extracted from the seismic in a multi-stage iterative process. Initially a constant phase wavelet whose amplitude spectrum matches the seismic data is extracted. A cross-correlation procedure indicates time shifts required to improve the fit. Once the well and seismic are reasonably tied, a complex phase wavelet can be extracted using both the seismic and well data to model the shaping operator.

• A multi-well averaged wavelet is generally used in the actual inversion process. Prestack Inversion utilizes different wavelets for each angle, to account for frequency variation caused by frequency-dependent absorption and NMO tuning.

Post-Stack Inversion

Although eclipsed somewhat by the new methodologies in Pre-Stack Inversion, Post-Stack Inversion is still a fundamental component of reservoir characterization.

• Required data is more commonly present than that necessary for Pre-Stack Inversion.

• Model-based inversion is the most widely used method within STRATA.
• Volumes of acoustic impedance are the products of Post-Stack Inversion.
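
Of the post-stack methods listed under Inversion Types, the recursive scheme is simple enough to show in full: impedance is accumulated sample by sample from a reflectivity estimate, with the missing low frequencies supplied by the initial model. The sketch below is illustrative only, not the STRATA algorithm.

```python
import numpy as np

def recursive_inversion(reflectivity, z0):
    """Simplest (recursive) post-stack inversion: accumulate acoustic impedance
    from a reflectivity series via Z[i+1] = Z[i] * (1 + r) / (1 - r).
    z0 is the starting impedance, typically taken from the initial model;
    in practice the trace must first be scaled to true reflectivity."""
    z = np.empty(len(reflectivity) + 1)
    z[0] = z0
    for i, r in enumerate(reflectivity):
        z[i + 1] = z[i] * (1.0 + r) / (1.0 - r)
    return z
```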

Prestack Seismic Inversion

This process is also referred to as Simultaneous Inversion, as two or more lithologic volumes are created simultaneously. The typical products of this process are acoustic impedance, shear impedance and density, although other combinations are also possible.

• STRATA has two options for the input seismic data for Prestack Inversion:
  - Fully processed NMO-corrected CDP gathers
  - Two or more angle stacks

A background relationship between P-impedance, S-impedance and Density is determined from the well information, and used to stabilize the inversion process. Prestack inversion is especially useful for analyzing data with AVO anomalies.
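
A background relationship of this kind can be sketched as straight-line fits in the logarithmic domain, for example ln(ZS) ~ k*ln(ZP) + kc and ln(rho) ~ m*ln(ZP) + mc estimated from the well logs. The snippet below is a minimal illustration under that assumption, not the STRATA implementation.

```python
import numpy as np

def background_relationship(zp_log, zs_log, rho_log):
    """Fit the background trends used to stabilize simultaneous inversion:
    straight lines relating ln(ZS) and ln(rho) to ln(ZP) from well log samples.
    Deviations from these trends carry the fluid/lithology information."""
    x = np.log(zp_log)
    k, kc = np.polyfit(x, np.log(zs_log), 1)      # ln(ZS) ~ k * ln(ZP) + kc
    m, mc = np.polyfit(x, np.log(rho_log), 1)     # ln(rho) ~ m * ln(ZP) + mc
    return (k, kc), (m, mc)
```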

Inversion Diagnostics

Inversion analysis is performed at the well locations to optimize the inversion parameters prior to final computation.

• Inversions are run based on the selected parameters, and comparisons are made between the inverted, modeled and original well data for impedance, density, Vp/Vs, etc.

• Synthetic seismic traces calculated from the inverted impedance are compared with original seismic traces at each well location.

• Inversion parameters can be interactively modified to instantly see the new inversion results and associated diagnostic displays.

• Cross validation is applied for each well in the initial model, whereby the well is hidden from the inversion process and compared with the subsequent result.

• Inversion error is calculated by subtracting the inversion synthetic, calculated from the impedance traces and the inversion wavelet, from the input seismic. Localized events apparent in the Inversion error signify problem areas for review.

The final inversion can be run with time or horizon boundaries and results in a SEGY volume, which can be navigated, interpreted and displayed exactly as the original seismic volume. Regardless of the type of inversion run, the STRATA program provides an intuitive and easy-to-use methodology for integrating inversion products into the reservoir characterization workflow.


EMERGE Multi-Attribute Analysis

EMERGE is designed to predict rock properties away from the borehole, using well logs and attributes of the seismic data. The rock properties may be any measured log type such as porosity or velocity, or may be derived lithologic attributes such as volume of shale. Using multi-linear regression or neural network analysis, EMERGE “trains” at the well locations to learn the underlying transform which connects the log and seismic data. It then applies that training result, transforming the entire 3D seismic volume into a volume of the log property.

Training Data (Target Logs)

Any measured or calculated log data can be chosen as the target to be predicted, but must be present at all of the training locations.

• EMERGE assumes the training logs are “noise”-free. They need to be QC’ed for logging problems, and preferably reviewed by a petrophysicist for possible measurement corrections.

• Well-to-seismic ties are critically important, as sample-to-sample comparisons are made between the target log & seismic attributes in time. EMERGE utilizes P-wave logs in the training wells to perform the depth-to-time conversion. Check shot corrections and log-to-seismic correlations, which update the P-wave log, are performed in eLOG, a companion program provided with EMERGE. Residual time shifts between the target logs and the seismic attributes, which may exist despite the correlation efforts, are corrected automatically using a cross-correlation technique.

• A good distribution of wells representing the range of expected rock properties is optimum.

Training Data (Seismic Attributes)

Seismic attributes can be internally or externally derived, and are sample-based (volumes), rather than horizon-based (maps). The number of input attribute volumes for consideration is unlimited. Horizon-based attributes are used in a similar manner in ISMAP, a geostatistical mapping program which utilizes the EMERGE algorithm for multiple maps.

• A number of different attribute types are calculated automatically within the EMERGE program.

• Many other useful seismic attributes are generated outside the EMERGE program. This is because they are “difficult” to produce or are proprietary to other software packages.

• Whether generated internally or externally, all attributes are considered equally viable until their relationships to the target property are discovered and ranked in the training process.

Training

A workflow utilizing multi-linear regression and neural network prediction uncovers the underlying transform which connects the log and seismically derived attribute data. Complementary features from several attributes will combine to discriminate subtle features on the target logs, which none of the individual attributes could predict by themselves.

• The EMERGE prediction should be focused on the reservoir level, rather than on a large time range representing varied geologic episodes. This allows the transform to be designed for a more specific set of rock properties.

• The seismic wavelet is not used independently in the EMERGE process, other than its importance in producing good well-to-seismic ties. It is, however, part of the derived relationship as a consequence of using seismic attributes.

• At each sample, the target log is modeled as a linear combination of several attributes, using a process called multi-linear stepwise regression (see the sketch following this list):
  - The single best attribute is found by trial and error; cross plotting each attribute with the target and determining the Prediction Error of each (RMS difference between the target and the predicted value).
  - All attribute pairs are tested in the same manner, given that the single best attribute is a member of the pair.
  - Attribute groups with increasing numbers of members are tested. The previous group is always a subset of the new larger group.
  - The attribute groups are then ranked by Prediction Error.
• A fundamental challenge to this approach resides in the difference in frequency content between logs and seismic data. EMERGE utilizes a convolutional operator to extend the cross plot regression to include neighboring samples from the attributes, which have a possible relationship with the given log sample.
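
The greedy search described in the list above can be sketched as follows. The function below is a plain illustration (no convolutional operator, and the candidate attributes are assumed to be already aligned with the target log samples), not the EMERGE code.

```python
import numpy as np

def stepwise_regression(attrs, target, max_attrs=5):
    """Multi-linear stepwise regression: greedily add the attribute that most
    reduces the RMS prediction error, keeping earlier picks fixed.
    attrs: (n_samples, n_attrs) candidate attributes; target: (n_samples,)."""
    n_samp, n_attr = attrs.shape
    chosen, errors = [], []
    for _ in range(max_attrs):
        best_err, best_j = np.inf, None
        for j in range(n_attr):
            if j in chosen:
                continue
            design = np.column_stack([np.ones(n_samp), attrs[:, chosen + [j]]])
            w, *_ = np.linalg.lstsq(design, target, rcond=None)
            err = np.sqrt(np.mean((design @ w - target)**2))
            if err < best_err:
                best_err, best_j = err, j
        chosen.append(best_j)
        errors.append(best_err)
    return chosen, errors   # attribute indices in pick order, training error per step
```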

Neural Networks

EMERGE uses neural networks to account for non-linear relationships between the target logs and attributes. This can increase both the predictive power and the resultant resolution of generated rock property volumes.

• Several different neural networks are available:
  - Multi-Layer Feedforward Network (MLFN)
  - Probabilistic Neural Network (PNN)
  - Radial-Basis Function Network (RBF)

• Generally, the optimum set of attributes determined in the multi-linear stepwise regression is used with the neural network, which determines the nonlinear weighting factors for each attribute.

• As neural networks operate best on data with stationary statistics, the option exists to remove the trend determined in the multi-linear regression calculation, and add it back after the neural network calculation is made on the residual data.

Application

The final step is to use the relationship determined in the training process to transform the seismic volume into a trace-for-trace replacement with the target rock property. These rock property volumes are ready for further interpretation or used as input to geomodeling applications.

• EMERGE is also used to classify an input seismic sample into one of N classes, rather than a direct mapping to rock properties. This is accomplished using discriminant analysis, or in the case of non-linear separation between classes, neural networks. In addition to creating a classification volume such as facies or lithology, the probability of encountering any given class is determined.

• EMERGE has a very useful feature in its log-to-log prediction. In this case, EMERGE predicts logs which are missing or corrupted in some manner. Many log types have pre-existing regression equations. However, EMERGE calculates a “local” derivation of a new statistical relationship for log predictions. The process used is the same as with the seismic attributes. However, in this case, available log suites are used in place of attributes for calculating the desired log type.

EMERGE is a program designed to merge well log and seismic data, predicting rock property volumes using attributes of the seismic data. Statistical analysis is used to determine the attribute-log relationship, including neural networks when nonlinear relationships are encountered. Extensions of the EMERGE process are used for classification and log predictions from other logs.

Validation

Validation is a critical step in the EMERGE process. Theoretically, the inclusion of more and more attributes to the transform will create a better and better match to the training data. Unfortunately, there is a risk of “over training” on the available data set, which degrades the ability to make accurate predictions as you move away from the borehole. The key to success lies not only in determining which attributes, and their weights, are best suited to predicting the targeted rock property, but discovering when to stop adding more attributes to the mix.

• EMERGE uses an automated validation process which systematically “hides” each of the training wells and predicts its values using the transform calculated from the other wells.

• A Validation Curve, showing predicted error rate versus number of attributes used, graphically displays how many attributes optimally predict the target property.

• A similar validation process is used to determine the size of the convolutional operator.

• The program also has easy procedures for making user defined “blind” tests on the fly.
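
The leave-one-well-out idea can be sketched directly on top of a stepwise regression result. The helper below is hypothetical: it simply averages the hidden-well prediction error for an increasing number of attributes to produce a validation curve, using a plain linear fit in place of the full EMERGE training.

```python
import numpy as np

def validation_curve(well_attrs, well_targets, attr_order):
    """Leave-one-well-out validation error versus number of attributes.
    well_attrs / well_targets: lists with one (n_i, n_attrs) / (n_i,) array per
    well; attr_order: attribute indices in the order picked by stepwise regression."""
    n_wells = len(well_attrs)
    curve = []
    for n_use in range(1, len(attr_order) + 1):
        cols = attr_order[:n_use]
        errs = []
        for hide in range(n_wells):
            # train on all wells except the hidden one
            tr_a = np.vstack([well_attrs[i][:, cols] for i in range(n_wells) if i != hide])
            tr_t = np.concatenate([well_targets[i] for i in range(n_wells) if i != hide])
            design = np.column_stack([np.ones(len(tr_t)), tr_a])
            w_fit, *_ = np.linalg.lstsq(design, tr_t, rcond=None)
            # predict the hidden well and record its RMS error
            test = np.column_stack([np.ones(len(well_targets[hide])),
                                    well_attrs[hide][:, cols]])
            errs.append(np.sqrt(np.mean((test @ w_fit - well_targets[hide])**2)))
        curve.append(np.mean(errs))
    return curve   # one validation error per number of attributes used
```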


ISMAP Geostatistical Mapping

ISMAP is a geostatistical mapping program designed to help the geoscientist integrate multiple sets of geological or geophysical measurements, in order to characterize a sparsely measured attribute, whether that be a reservoir property, formation depth or velocity value. Typical data inputs to the program are a combination of dense, “fuzzy” data such as seismic attributes, and sparse, accurate data such as measured well log values. The resultant maps honor the sparse (well) data exactly, while using the dense (seismic) data to determine spatial continuity properties which drive the inter-well interpolation.

Generating the Input Data

ISMAP utilizes rock property measurements extracted from well logs, and maps generated from volumes of seismically derived data.

• A range of horizon-based attribute maps, representing the dense data, can be quickly generated from seismic volumes. Seismic attribute maps are also accessed from previous Hampson-Russell projects or imported as ASCII files.

• The EMERGE multi-attribute transform algorithm has been adapted for map-based calculations in ISMAP. This allows us to use a collection of attribute maps simultaneously, selected for their combined correlation to the target property, rather than relying on single attributes with untested relevance. See the EMERGE description for a detailed explanation of the process.

• Well log data, representing the desired target reservoir property, consist of a single value extracted or generated at each well location. This data is accessed directly from the wells in the database or imported via ASCII files.

Determining the Spatial Relationship

Amongst the features in ISMAP are a set of geostatistical analysis tools for measuring and modeling the spatial continuity patterns in the data. This gives a qualitative assessment of the ability to accurately predict the target property as distance from the well bore increases. It also distinguishes trends in the data which require special consideration.

• Initial cross plotting provides insight into whether the dense data is an appropriate indicator of the target property. It will also highlight, for editing, outliers which may be detrimental to the calculation of spatial continuity. The regression fit from the cross plot can be applied to the seismic data for a first-pass look at a map of the target property.

• A range of variogram modeling options exists for measuring the spatial continuity in the data sets. Smoothed variograms are automatically computed for wells, seismic, and wells+seismic. The user, however, has control over the variogram design parameters such as number of structures, type of model (shape), sill, range and nugget.

• To investigate for anisotropy, ISMAP utilizes covariance maps which show the calculated variances as a function of direction as well as distance. Multiple directionally-based variograms are utilized when directional trends in the data are observed.

Mapping Methods

Geostatistical mapping is a method of interpolation which predicts unknown values from data at observed locations. Whether using one or two data sets, the process minimizes the prediction error, therefore producing an “optimal prediction” which tends to be smooth in nature.

• ISMAP contains several geostatistical interpolators which incorporate the spatial continuity defined through variogram modeling. Single variable mapping utilizes Kriging, while multiple data sets require the more complex Cokriging, Collocated Cokriging, Kriging with a Trend Model (KT) or Kriging with External Drift (KED). A minimal ordinary kriging sketch follows this list.

• Geostatistical algorithms assume that the data is stationary and approximately Gaussian in distribution. As this is often not the case, tools are provided to analyze trends in the data. Options exist for removal of the trends prior to computing, followed by trend restoration (KT), or an automated method (KED) which determines and accounts for the trends simultaneously.
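
The ordinary kriging sketch below solves the standard kriging system with a spherical variogram for a single target location. It is a minimal illustration under simple assumptions, not the ISMAP solver; cokriging, KT and KED are not shown.

```python
import numpy as np

def spherical_variogram(h, sill=1.0, rang=1000.0, nugget=0.0):
    """Spherical variogram model (one of the shapes mentioned above);
    the nugget handling at h = 0 is simplified for this sketch."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rang - 0.5 * (h / rang)**3)
    return np.where(h < rang, g, sill)

def ordinary_kriging(xy_wells, values, xy_target, vario=spherical_variogram):
    """Ordinary kriging of one target location from sparse well values.
    xy_wells: (n, 2) coordinates; values: (n,); xy_target: (2,)."""
    n = len(values)
    d_ww = np.linalg.norm(xy_wells[:, None, :] - xy_wells[None, :, :], axis=2)
    d_wt = np.linalg.norm(xy_wells - xy_target, axis=1)
    # kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vario(d_ww)
    A[n, n] = 0.0
    b = np.append(vario(d_wt), 1.0)
    w = np.linalg.solve(A, b)
    estimate = np.dot(w[:n], values)
    error_variance = np.dot(w, b)        # kriging (error) variance at this location
    return estimate, error_variance
```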

Diagnostic Tools

As with any interpretive process, risk reduction is enhanced when the results are analyzed throughout the workflow. ISMAP includes several diagnostic steps which increase our confidence that the initial data is adequate for geostatistical computing, and that the results demonstrate an acceptable probability of error.

• A critical assumption in geostatistical mapping is that the input data is fairly Gaussian in nature. The data histogram plot provides an understanding of data distribution, and can be calculated for the initial data and all subsequent maps in the workflow.

• Cross validation is the process of deleting one well at a time from the kriging calculation, predicting its value from the other wells, and displaying the misfit error associated with the prediction at that location.

• Another assumption in geostatistical mapping is that the error variance is minimized. An estimate of error is therefore calculated at each point for every map generated.

Stochastic Simulation

Stochastic simulation involves creating a series of equally probable realizations, or models, which honor the sparse (well) data exactly, and at the same time display the spatial continuity properties implicit in the variogram. The most popular type of simulation, and that used in ISMAP, is Sequential Gaussian Simulation.

• The simulation maps differ from the kriged or cokriged maps in that they contain the possibility of large deviations or outliers. Although they honor the sparse (well) data, they are not “optimally” smoothed, therefore representing a range of equi-probable scenarios.

• It is difficult to assimilate all of the information by viewing the many multiple realizations generated. ISMAP analyzes and presents the simulation results in three user-parameterized methods for viewing the distribution of features:
  - Average: An average of all the maps
  - Probability: A map of probability that a certain range of values is present
  - Indicator: A distribution map of those values which reach a probability threshold

The use of geostatistics is becoming much more mainstream in reservoir characterization workflows. ISMAP provides a comprehensive and easy-to-use platform for the interpreting geoscientist to apply this technology to everyday interpretation challenges.


PRO4D Time-Lapse Seismic Interpretation

PRO4D integrates all the key elements required for time-lapse seismic monitoring. The general objective is to track production-related changes in the reservoir and determine areas of bypassed production, or inefficiencies in the production process. The program includes a well log toolkit, fluid replacement and rock physics modeling, synthetic seismic generation and a library of functions for the display, comparison, calibration, interpretation and inversion of multiple vintage 3D seismic data.

Rock Physics and Synthetic Modeling

Rock physics relationships provide the bridge between the primary reservoir properties and the seismic response. PRO4D supports advanced rock physics modeling that assesses how the seismic response will be affected by changes in fluid saturation, pressure and temperature.

• Multiple scenarios for a single well can be rapidly generated using Systematic Changes modeling. Synthetic traces (pre- or post-stack) are automatically calculated and organized into a systematic grid for display.

• Three-dimensional synthetic volumes can also be created from rock property models imported from reservoir simulation.

• Synthetic volumes with modeled production effects are compared with “base” models to create difference volumes for comparison to seismic difference volumes.

Seismic Survey Calibration

The challenge in calibrating surveys is in removing unwanted differences (spurious differences related to acquisition/processing and near-surface changes due to seasonal temperature variations, tidal effects and changes in weathering or seabed sediments) while not altering the desired differences related to production-induced anomalies.

• PRO4D contains comprehensive survey calibration features that match the phase, frequency, amplitude and event times of base and monitor surveys in areas where production has not occurred.

• Multiple methods exist in PRO4D to assess the quality of the seismic matches.

• Once the background differences have been removed, the production-induced anomalies can be interpreted with confidence.

• Time-lapse analysis, involving multiple vintages of surveys, generates a very large and confusing amount of data. The advanced data management system in PRO4D facilitates organization and ensures consistency of processing and interpretation of all volumes at every stage throughout the calibration process.

Analysis of Production-Related Changes

A wide range of time-lapse attributes can be generated that highlight production anomalies in the data.

• Extracting difference amplitudes from the reservoir zone allows us to highlight the reflectivity changes caused by the production process.

• Volumes of amplitude ratio, cross-correlation coefficient, and time shifts further define the extent of the production effects (a minimal time-shift sketch follows this list).

• An important component for analysis between multiple time-lapse surveys often lies in the comparison of seismic attributes following horizons to characterize a stratigraphic zone of interest. PRO4D includes advanced horizon management which simplifies the use of related horizons from multiple vintages of surveys or attribute volumes.
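
The time-shift attribute referenced above can be sketched as a windowed cross-correlation between a base trace and a monitor trace. The routine below is an illustrative assumption (integer-sample lags only, simple normalized correlation, applied trace pair by trace pair), not the PRO4D algorithm.

```python
import numpy as np

def timelapse_shifts(base, monitor, dt, win=32, step=16, max_lag=8):
    """Windowed cross-correlation between base and monitor traces.
    Returns window-centre times, best time shift (s) and correlation coefficient
    per window, the kind of attribute described in the list above."""
    centres, shifts, coeffs = [], [], []
    for start in range(0, len(base) - win, step):
        b = base[start:start + win] - np.mean(base[start:start + win])
        best_cc, best_lag = -1.0, 0
        for lag in range(-max_lag, max_lag + 1):
            lo, hi = start + lag, start + lag + win
            if lo < 0 or hi > len(monitor):
                continue
            m = monitor[lo:hi] - np.mean(monitor[lo:hi])
            denom = np.linalg.norm(b) * np.linalg.norm(m)
            cc = float(np.dot(b, m) / denom) if denom > 0 else 0.0
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        centres.append((start + win // 2) * dt)
        shifts.append(best_lag * dt)
        coeffs.append(best_cc)
    return np.array(centres), np.array(shifts), np.array(coeffs)
```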

Time-Lapse Inversion

Inverting time-lapse seismic volumes can provide detailed information regarding acoustic impedance changes between the base and monitor surveys, which gives us insight into saturation changes.

• Time-lapse inversions are typically compromised by a lack of low-frequency velocity information, and time-variant misalignment between base and monitor data. When this occurs, the impedance (or velocity) differences do not accurately represent the reservoir changes. These changes in velocity below the frequency content of the seismic wavelet are calculated in PRO4D from the cross-correlation and time shift cubes already used to determine time-variant statics in the calibration process. The inclusion of the missing low-frequency velocity information in the initial model greatly improves the inversion results.

Volumetrics

PRO4D contains an advanced volumetric analysis capability that facilitates the comparison of time-lapse anomalies to production and injection data at their associated wells.

• Interpretation of the time-lapse seismic response and its relation to production information is challenging in part due to the non-uniqueness of the 4D response. Through material balance, the time-lapse response is validated and the interpretation can be refined.

• New features simplify comparison of time-lapse attribute maps to pressure and saturation distributions estimated through reservoir simulation.

• Comparing the volume of the time-lapse anomalies to known production changes of saturation can greatly reduce uncertainty in the interpretation process.

PRO4D provides a framework for the analysis and interpretation of time-lapse data sets. The interpreter has tools to model expected production-related changes, minimize non-production differences, and analyze the actual differences for further exploitation of the reservoir. It is also commonly used to register seismic attribute volumes, which due to processing differences, have been subjected to time or spatially varying shifts.

Time-Lapse Attribute Correlation

Synthetic attributes calculated from Systematic Changes modeling are analyzed and correlated to their corresponding reservoir property values in a time-lapse attribute correlation toolkit.

• Relationships between the synthetic attributes and reservoir properties are derived using multi-variate regression, neural network and cross-correlation techniques.

• The transforms generated in the correlation process are applied to the seismic data to produce maps of time-lapse reservoir properties such as saturation change, contact thickness, pressure and temperature.


PROMC Multicomponent Seismic Interpretation

Historically, it has been difficult to interpret PP and PS seismic volumes consistently. These difficulties are related to the different event times and frequencies on the PS data, together with differences in PP and PS reflectivity. Furthermore, the interpreter has many more seismic and attribute volumes to manage simultaneously than with traditional seismic interpretation. PROMC has been developed to directly address these challenges, creating an easy-to-use and intuitive work environment for the interpretation and analysis of post-stack multicomponent data.

Modeling

Accurate interpretation of PP and PS data starts with an analysis of log information, and the effects lithology and fluid changes have on the multiple wave mode responses.

• PROMC has the capability to create synthetic traces for all three wave modes.

• All modeling functions are linked to the currently active domain function. Logs, seismic and synthetics are shown in a common domain (depth, PS, or PP time) and conversions are made based upon the well log velocity data.

• Advanced fluid substitution modeling is included to assess the impact of fluid saturation and lithology changes in the multiple wave modes.

• In addition to modeling “what if” scenarios, the synthetic traces are used to assist in the well log calibration process which links seismic events to their corresponding geologic markers.

• Synthetic traces are also used for wavelet extraction techniques that estimate PP and PS wavelets from the seismic and well data, leading to phase and frequency matching of the data set.

Multicomponent Attributes

Multicomponent seismic data has proven to be very useful as an aid in identification of reservoir petrophysical properties. Once the multicomponent data sets are aligned in a common domain, PROMC can extract important horizon or volume-based attributes from the data.

• PP and PS data volumes can be cross-correlated and cross-plotted to highlight anomalous areas. Cross-plotting and interpretation is greatly improved when the multiple data modes are filtered to contain similar bandwidth.

• Vp/Vs ratio maps derived from the horizon times are a key attribute for interpretation (see the short isochron example following this list).

• Interval Vp/Vs data can provide useful insights into the sand-to-shale ratio, lithology or fluid content between the specified horizons.

• The Vp/Vs ratio data obtained in PROMC is at lower frequency than is possible to obtain using AVO techniques. This low-frequency Vp/Vs data may be an important constraint in AVO Rs determinations.

• Additional details about lithology and fluid saturation can be obtained through careful analysis of the P and S reflectivity variations.
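
The isochron-based Vp/Vs attribute mentioned in the list above follows from the standard relations dT_pp = 2*dz/Vp and dT_ps = dz/Vp + dz/Vs, which give Vp/Vs = 2*dT_ps/dT_pp - 1 for the same depth interval. A minimal sketch (not the PROMC code) is:

```python
import numpy as np

def interval_vpvs(t_pp_top, t_pp_base, t_ps_top, t_ps_base):
    """Interval Vp/Vs from PP and PS horizon times of the same two geologic
    markers (each pair of times picked in the native domain of its volume).
    Uses the isochron relation Vp/Vs = 2*dT_ps/dT_pp - 1."""
    dt_pp = np.asarray(t_pp_base) - np.asarray(t_pp_top)   # PP isochron
    dt_ps = np.asarray(t_ps_base) - np.asarray(t_ps_top)   # PS isochron
    return 2.0 * dt_ps / dt_pp - 1.0
```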

Joint Inversion

The purpose of simultaneously inverting PP and PS data is to produce estimates of P and S impedances and (optionally) density. Joint Inversion in PROMC includes a coupling between these variables, adding confidence that resulting impedance variations are indicative of differences in fluid or lithology characteristics.

• An initial low-frequency model is created from the extrapolated well logs, updated by the horizon matching performed in the domain conversion process.

• A model-based inversion then updates this initial model to create detailed models of P and S Impedance, Vp/Vs ratio and density volumes that are consistent with the PP and PS seismic volumes.

• These inverted results merge the low-frequency information from horizon matching with the higher frequency detail that comes from the multicomponent seismic amplitudes.

• Cross-plots of P and S impedance, or P impedance and Vp/Vs ratio help to classify lithology and fluid content.

• The added information obtained from the multicomponent data has proven beneficial in predicting lithology logs using the EMERGE process.

PROMC makes multicomponent interpretation and modeling highly accessible to the interpreter. It arranges all necessary functions into a coherent workflow and manages the enhanced data challenges that are inherent to the technology. The benefits of interpreting multicomponent seismic data are now available without the historic drawbacks that limited its full utility.

Interpretation

Domain conversion is the cornerstone of interpreting multicomponent seismic volumes. Converting PS seismic data to PP time can greatly simplify the interpretation process. PROMC provides multiple solutions to fine-tune the correlation between PP and PS data sets in an intuitive workflow, leading to accurate domain conversion.

• Imported velocity files or velocity tables can be utilized for the first pass at domain conversion, as well as a generic Vp/Vs ratio.

• The P and PS well logs are correlated to the seismic volumes recorded in each of those domains. The well velocities are then used to correctly define the relationship between the data sets and bring them into alignment at the well locations.

• Since the Vp/Vs ratio changes laterally, the velocity model produced through well ties is only accurate at the well locations. PROMC utilizes the picked horizons in each domain to guide the velocity model interpolation for structural or stratigraphic control.

• Horizon picking in PS data sets is facilitated by displaying the P-wave interpretation in each domain as a guide or using it as seed picks for auto tracking.

• After common events are picked on each data set, PROMC horizon matching ensures the structural image of the converted S-wave data set is consistent with the P-wave structural image. At the same time, the velocity model is updated to remove the inconsistencies.

• Digitizing and matching fault planes in each data set can be more accurate in areas where event matching is difficult due to poor reflectivity correspondence.


View3D 3D Visualization

View3D is a new visualization tool which has been developed to enhance the display and interpretation of all products in the Hampson-Russell software suite. View3D is seamlessly integrated within a Hampson-Russell project, thereby allowing seismic volumes from any Hampson-Russell program to be quickly displayed as 3D objects. A wide range of display capabilities allows the user to visually enhance the target area to allow the interpretation of subtle anomalies.

View3D can access any seismic volume, well log, horizon or data slice within a Hampson-Russell project. Among the numerous display options are:

• Fence diagrams
• Contour maps
• Well bores with colored amplitudes
• Opacity tools to enhance objects
• Seismic volumes with wiggle display and color

View3D uses the Hampson-Russell project as its database for display. Each object within the project may be individually selected for visualization.

View3D’s tight integration within a Hampson-Russell project ensures a useful consistency with other Hampson-Russell displays.

Training & Reservoir Services

Training Services

Public Workshops

Public workshops are organized by local Hampson-Russell offices. Workshops are scheduled throughout the year and are held in all our major centers. These workshops combine the underlying theory of the software with practical, hands-on exercises. Public workshops range from 1 to 5 days in duration, and are offered for the majority of our software packages. Scheduled workshop dates, locations and prices are listed on our website.

Private Workshops

Private workshops can be arranged for individual companies at any location. The content and duration of these courses can be adapted as required. Our private workshops range from extended training programs for new users to exercises-only courses for users already well-versed in the theory. Private courses can be arranged for any of the following: AVO (including AFI), STRATA, EMERGE, PRO4D, GLI3D, PROMC, ISMAP and Geoview & eLOG. Many users benefit from combining a private workshop with project-based training.

Project-Based Training

The best of both worlds comes from the ability to learn the theory and application of the software on clients’ own data. Mentoring results in a deeper understanding of the concepts and an assurance that the software is being used to best advantage.

Reservoir Services

Hampson-Russell has the global expertise and experience through its consulting staff to handle any size of geophysical consultancy project, from the inversion of a single 2D seismic line to integrated reservoir characterization of the largest 3D survey. Our reservoir services are not limited to the technologies available within the Hampson-Russell software suite. In addition to our Hampson-Russell software experts, we have expert consultants in other disciplines, including petrophysics and seismic interpretation.

Hampson-Russell can offer the following Integrated Reservoir Geophysics services:

AVO Analysis: AVO modeling, AVO fluid inversion (AFI), AVO attribute generation, AVO attribute analysis
Data Conditioning: Well log conditioning, seismic gather conditioning, rock property calibration
Petrophysical Analysis
Seismic Inversion: Post-stack inversion, prestack simultaneous inversion, prestack joint PP/PS inversion
EMERGE Predictions: 3D volume prediction, prediction of missing logs
Seismic Interpretation
Geopressure prediction
Fracture analysis

Our staff can provide services either on-site at a client’s office or in our own offices worldwide.

Hampson-Russell software can import data from many sources. In addition to file reading of various types, Hampson-Russell has direct links to Landmark, GeoFrame and OpenSpirit. These links allow a Hampson-Russell project to be directly connected to a third party data store, removing data redundancy and allowing real-time updates. Landmark, GeoFrame and OpenSpirit connectivity is supplied automatically with all software (except GLI3D) where available.


For further information about Hampson-Russell please contact your local office:


cggveritas.com/hampson-russell

Beijing, China +86 10 6437 4330 [email protected]

Calgary, Canada +1 403 266 3225 [email protected]

Dubai, U.A.E. +971 (0)4 391 3519 [email protected]

Houston, U.S.A. +1 800 561 5479 (U.S. & Canada only) +1 832 351 1188 [email protected]

Jakarta, Indonesia +62 (0)2 1252 2240 [email protected]

Kuala Lumpur, Malaysia +60 (0)3 2382 1100 [email protected]

London, UK +44 (0)20 8334 8830 [email protected]

Moscow, Russian Federation +7 (495) 789 8420 [email protected]

Mumbai, India +91 22 6703 1213 [email protected]

Perth, Australia +61 (0)8 9214 6240 [email protected]

Villahermosa, Mexico +52 99 33 104670 Ext 7017 [email protected]

Caracas, Venezuela +58 261 791 2113 [email protected]