
Page 1: 3D Modeling Primer

A 3D Property Modeling Primer Table of Contents

Acknowledgments Overview Part 1 – Statistical Concepts Part 2 – Modeling Concepts Part 3 – Modeling Methodologies Part 4 – Variograms Subject: Part 1 – Statistical Concepts Section: Introduction Chapter: Introduction Pages:

- Introduction - Purpose of the Course - Questions this Course Will Answer

Section: Basic Concepts and Terminology Chapter: Basic Concepts & Terminology Pages:

- Basic Concepts & Terminology - Some Simple Examples

Chapter: Descriptive & Analytical Tools Pages:

- Preface - Histograms - Typical Uses of Histograms - Univariate Statistics - Typical Display & Uses of Univariate Statistics - Crossplot - Crossplot Examples - H-Scatter Plots - Variogram Cloud - Variogram Cloud Usage Example - Variograms - Why do We Need Variograms? - Variogram Maps - Using the Variogram Map

Page 2: 3D Modeling Primer

Subject: Part 2 – Modeling Concepts Section: Modeling Geometry Chapter: 2D Grids Pages:

- Basic Components - Grid Nomenclature

- Grid Refinement - 2D Grid Examples

Chapter: 3D Grids – Property Models Pages:

- Components of a 3D Model - Example of a 3D Model

Chapter: 3D Grids – Reservoir Models Pages:

- Reservoir Models - 3D Simulation Grids - Structured Grids - Unstructured Grid Geometry (FloGrid)

Section: Data Transformations During Modeling Operations Chapter: Data Transformations

Pages: - Thresholding - Upscaling/Averaging - Segregation - Masking - Removing a Trend - Normal Score Transforms - Weighting

Subject: Part 3- Modeling Methodologies Section: Gridding Methods Chapter: Algorithm Classifications Pages:

- Introduction - How Are Gridding Algorithms Classified? - Kriging vs. Non-Kriging Algorithms - Algorithms for Discrete vs. Continuous Data - Deterministic Compared with Probabilistic Data

Chapter: Variogram Basics Pages:

- Variogram Roles in Gridding & Geostatistics - Facts to Remember About Variograms

Chapter: Terminology Used During Algorithm Descriptions Pages - Disclaimer

Page 3: 3D Modeling Primer

Chapter: Traditional Estimation Algorithms Pages:

- Mechanics of 3D Traditional Estimation Algorithms - A Survey of Traditional Methods

Chapter: Kriging Algorithms Pages:

- Kriging Workflow - The Mechanics of 3D Kriging - Difference Between the Search & Variogram Range

Chapter: Anisotropy Pages:

- What is Anisotropy? - An Example of Anisotropy

Chapter: Survey Pages:

- A Survey of Some Kriging Algorithms Chapter: Simulation Algorithms Pages:

- Simulation Algorithms - Sequential Gaussian Simulation – Continuous - Sequential Indicator Simulation – Discrete - Truncated Gaussian Simulation – Discrete - Object Modeling

Page 4: 3D Modeling Primer

Section: Gridding Operations

Chapter: Gridding Guidelines Pages:

- An Overview of the Property Modeling Workflow Chapter: Algorithm Selection Pages:

- How Do I Know Which Algorithms to Use With My Data?

- Selecting an Algorithm for Discrete or Facies Modeling - Selecting an Algorithm for Petrophysical Modeling

Chapter: Quality Control Procedures Pages:

- Quality Control During Modeling Chapter: Probability Options Pages:

- How Probable Are My Models & Volumes? - Selecting a Best Realization

Chapter: Comparative Table of Gridding Algorithms Pages:

- Table of Gridding Algorithms Section: Congratulations

Chapter: Congratulations Pages:

- Congratulations - Further Study

Subject: Part 4 – Variograms Section: Purpose of This Topic Section: Review Chapter: Review Pages:

- Review of basic facts – experimental variogram - Review of basic facts – variogram model - Review of Anisotropy - Review of Variogram Directions and Types

Section: The Big Picture

Page 5: 3D Modeling Primer

Chapter: Generic Workflows Pages

- How do you make a variogram? - How do you determine anisotropy? - How do you use variograms to determine layer distance?

Chapter: Petrel-Specific Variogram Facilities Pages

- Overview - Variogram facilities in an object’s Settings tab - Variogram facilities in the modeling dialogs - Variogram facilities in the Data Analysis tool

• Preparing for variogramming discrete data • Preparing for variogramming continuous properties

Chapter: Petrel Interactive Tools and Icons for Making Variograms Pages

- Lag, Azimuth, and Search Angle Icon - Variogram Display

Chapter: Modeling the Variogram in the Petrel Data Analysis dialog Pages

- Simple Petrel procedure - Using the Variogram

Page 6: 3D Modeling Primer

Acknowledgements: This primer could not have been written without the help of, and previous documents provided by: Sujit Kumar, Doug Palkowsky, Sanjay Paranji, Leonid Shmaryan, Lothar Schulte, and Drew Wharton.

Overview: This subject is too large for a single computer-based training module, and is therefore subdivided into four major topics – Statistical Concepts, Modeling and Geometrical Concepts, Modeling Methodologies, and Variograms. While variograms are discussed in general in the first three topics, the last topic provides an independent summary as well as detailed instructions and workflows which are Petrel-specific.

Page 7: 3D Modeling Primer

Subject: Part 1 – Statistical Concepts Section: Introduction

Chapter: Introduction Page title: Introduction

Ever read a book about geostatistics?

Glasses Can Help

Page 8: 3D Modeling Primer

Page title: Purpose of the Course

The three primary purposes of this course are:

1. Provide the student with a common sense, generic background in geostatistical concepts

How?

• Use simple terminology, not mathematical notation
• Use plain language, common sense analogies
• Focus on mechanics of how to reach specific goals, not on proving or demonstrating theorems
• Recommend texts for deeper understanding to lead to creativity and true expertise

2. Provide the student with the vocabulary to become quickly productive with the tools in any geostatistical software, including Modeling Office, Petrel, FloGrid, LPM, …

What do we mean by productive? PRODUCTIVE MEANS BEING ABLE TO DO THESE THINGS:

• Understand the variety of statistical tools available for data analysis before and after modeling
• Determine if there is a relationship between a property and a seismic attribute
• Understand the definition of a variogram and its uses in the grand scheme of things
• Understand the variety of data transforms used in geostatistics and modeling
• Determine if your property values are directionally biased
• Visualize the grid geometries used in modeling, from 2D gridding to simulation
• Tell the difference between kriging and non-kriging algorithms
• Tell the difference between deterministic and probabilistic algorithms
• Understand the difference between facies modeling and petrophysical property modeling
• Understand the use of stochastic methods
• Learn how to use seismic attribute grids as secondary input data
• Measure the QUALITY of geostatistical models by comparing statistics

3. Provide specific recommendations for modeling lithology and properties under various conditions

• Choose appropriate kriging and non-kriging estimation algorithms for modeling based on data and reservoir characteristics.

Page 9: 3D Modeling Primer

Page Title: Questions This Course Will Answer

• What’s the advantage of using GeoStatistics?

• Why are there so many kinds of kriging algorithms?

• What’s the difference between kriging and non-kriging algorithms?

• What are “stochastic realizations”?

• What does a variogram actually show?

• Is a variogram actually necessary?

• Why do I have to make a vertical and a horizontal variogram?

• What is the meaning of “anisotropy”?

• What do I do if my data set has “anisotropy”?

• How do I know if my computed model is OK?

• Is there a recommended geostatistical modeling workflow for each property like lithology, permeability, etc.?

• How can geostatistical algorithms help in the determination of probabilities?

Here are some specific questions you’ll get answered in this course

Now that you know what this course is about, we'll move along to the first topic – Basic Concepts. Good luck!

Page 10: 3D Modeling Primer

Subject: Part 1 – Statistical Concepts Section: Basic Concepts and Terminology

Chapter: Basic Concepts and Terminology Page Title: Basic Concepts and Terminology

To make progress, we all need the right words. Here are a few of the more important ones we’ll need for learning about geostatistics.

• Variance - a measurement of how different the members of a collection are from each other. (Measured in units of the collection). [Q 1]

• Correlation - a way to measure whether two separate collections are related.

(Measured in percent). [Q 2]

• Anisotropy - a way to measure whether variance within a collection of data is determined by direction. (Measured in azimuth and percent eccentricity). [Q 3]

• Probability – a measurement of the likelihood of an event. (Measured in percent). [Q 4]

Stationarity – this is simply an ASSUMPTION made regarding the rules of behavior of the properties we analyze, study, or model with geostatistical tools. The rule is simply that the property must behave consistently within the volume chosen for analysis, study, or modeling. If it does not, then the geostatistical tools we use will not work properly. Stationarity assumes that a property behaves the same way in all locations of the chosen volume; i.e., that the samples have no inherent trend. If a trend exists, it must be removed before using certain algorithms.
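To make these terms concrete, here is a minimal Python sketch (using numpy) with made-up porosity values; the numbers and the "seismic attribute" are purely illustrative, not from any real data set.

import numpy as np

# Hypothetical porosity samples from two collections (fractions, not percent)
sand_porosity = np.array([0.21, 0.23, 0.22, 0.24, 0.20, 0.25])
mixed_unit    = np.array([0.05, 0.22, 0.08, 0.24, 0.03, 0.21])

# Variance: how different the members of each collection are from each other
print("variance, clean sand :", np.var(sand_porosity))
print("variance, mixed unit :", np.var(mixed_unit))    # much larger spread

# Correlation: are two collections related? (porosity vs. a fake seismic attribute)
seismic_attr = 1000.0 * sand_porosity + np.random.normal(0, 5, size=sand_porosity.size)
corr = np.corrcoef(sand_porosity, seismic_attr)[0, 1]
print("correlation coefficient:", corr)                # near 1.0 = strongly related

# Probability: likelihood of an event, estimated here as a fraction of samples
prob = np.mean(mixed_unit > 0.2)
print("P(porosity > 0.2) ~", prob)                     # fraction of samples above the cutoff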

“ He used so many five and ten-dollar words in his lecture that now I’m completely broke...”

- Mathematics student

“ I must say that words simply cannot describe what happened.” - Hairless caveman upon discovering fire

Some simple concepts used in geostatistics

A complicated concept used in geostatistics

Page 11: 3D Modeling Primer

Questions for review:

1. Which concept is used to quantify whether an event has any chance of happening? [probability]
2. Which concept is concerned with simple differences between samples? [variance]
3. Which concept gives a way of comparing two different attributes? [correlation]
4. Which concept is concerned with direction? [anisotropy]

Page 12: 3D Modeling Primer

Page Title: Some Simple Examples

Consider these simple examples:

Variance – "Samples of porosity in a simple sandstone unit will show much less variance than samples measured in a unit containing several sands and a shale."

Correlation – "Values of some seismic attributes can show a strong correlation with values of certain petrophysical properties."

Anisotropy – "My porosity data set is anisotropic because measurements towards the Northeast vary much more than in other directions."

Probability – "I would like to see those locations in my saturation model where there is a 70 percent probability that values will be greater than 0.6."

Stationarity – "Porosities in the same geological unit, but in different fluvial depositional components, may not exhibit stationarity as a group, because the different particle sorting mechanisms at work may cause the variance of the property in one facies to behave in an entirely different fashion than in another facies." It is for this very reason that facies should be modeled first, then petrophysical properties modeled within facies.

Page 13: 3D Modeling Primer

Subject: Part 1 – Statistical Concepts Section: Basic Concepts and Terminology

Chapter: Descriptive and Analytical Tools Page Title: Preface

Geostatistical data analysis tools rely on the traditional definitions of statistics, and give us a way to measure, describe, and compare certain characteristics of our raw data and the resulting models. Below you can see a display of the most common of these tools and the way in which each presents its measurements to the user. We will discuss each of these tools in more detail.

[Figure: the most common descriptive and analytical tools – histograms, univariate statistics, crossplots, variograms, the variogram cloud, and the variogram map]

Page 14: 3D Modeling Primer

Page Title: Histograms

Below, you see an example of a histogram. This histogram depicts the distribution of all Porosity values of all well logs which fall in the Ness geological unit. What does this histogram tell us? [Q 1]

1. Each red-colored column represents a “class” (a range of values). Example: Class 1 has a range of 0.0 to 0.04, Class 2 has a range of 0.04 to 0.08, etc. [X –axis] [q 2]

2. The height of each column shows the number of points whose values fall in the range of the class [Y-axis] [q 3]

3. The overall shape of the histogram shows how the data may be “grouped” Example: The group of points in the first class are smaller and appear to be independent of the rest of the data, suggesting that they could be excluded from the higher, more useful values of this attribute.

Note the components of a histogram
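As a rough illustration of how such a histogram is assembled, the following Python sketch bins a set of hypothetical porosity values into 0.04-wide classes, matching the class width described above; the values themselves are made up.

import numpy as np

# Hypothetical porosity values (e.g., log samples falling in one geological unit)
porosity = np.array([0.01, 0.02, 0.15, 0.17, 0.18, 0.20, 0.21, 0.22,
                     0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.30, 0.31])

# Classes (bins) 0.04 wide, as in the example: 0.00-0.04, 0.04-0.08, ...
bins = np.arange(0.0, 0.40 + 0.04, 0.04)
counts, edges = np.histogram(porosity, bins=bins)

for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"class {lo:.2f}-{hi:.2f}: {n} points")
# The isolated points in the first class (very low values) show up as a small,
# separate column -- the kind of grouping the text suggests you may want to
# treat separately because it could represent a different facies.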

Page 15: 3D Modeling Primer

Questions for review:

1. A histogram shows us a __________________ _________________ of the data. [frequency distribution]

2. Each red column in the histogram example represents a _____________ of data. [range] or [class]

3. The height of each red column in the histogram example shows ______________. [how many points fall in the class]

Page 16: 3D Modeling Primer

Page Title: Typical Uses of Histograms [q 1]

1. For cleaning up of log data

a. Look at the histogram of the original log data.
b. If you note any clumping of data at either end of the histogram, as in the very low values in the one above, you might want to remove and treat this portion of the data differently, since it could represent a different facies.

2. For quality control after lumping or up-scaling of well logs

a. Look at the histogram of well logs for the property you will be mapping, such as porosity.
b. Lump the data or upscale the logs.
c. Look at the histogram of the lumped or upscaled logs; it should have the same characteristics as the original data. If not, the lumping or upscaling operation did not preserve the character of the data.

3. For quality control after modeling

a. Using the histogram of the lumped or upscaled data as the criterion, make sure that the histogram of the model (3D grid) maintains the same character.
b. If not, a different algorithm might need to be chosen.

Questions for review:

1. Name one good use of histograms. [clean up of log data], [qc after upscaling], [qc after modeling]

Page 17: 3D Modeling Primer

Page Title: Univariate Statistics

This set of measurements is simply a way of describing a particular data set – a single set of values assumed to represent one variable; hence "univariate". Measurements include:

• Measures of Size and Location, such as

o Number of points, number of null values
o Minimum value, maximum value, mean value, median value, etc.

• Measures of Distribution Spread, such as

o IQR (interquartile range) – the range of the middle 50% of the values.
o Variance – a measure of how different the data are; specifically, the average squared deviation of the data points from the mean of this particular data set. We'll learn more about variance in a spatial context when we get to variograms. [q 2]
o Standard deviation – simply the square root of the variance, also used to describe how the data points vary about the mean.

• Measures of Distribution Shape, such as

o Skew, which measures how much the data distribution deviates to the right or left of a normal distribution. [q 1]
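The sketch below (Python, with numpy and scipy) shows how these univariate measurements relate to one another on a hypothetical porosity data set; the values are made up for illustration only.

import numpy as np
from scipy import stats

# Hypothetical porosity data set (one variable, hence "univariate")
data = np.array([0.08, 0.12, 0.15, 0.18, 0.19, 0.20, 0.21, 0.22, 0.25, 0.31])

# Size and location
print("number of points :", data.size)
print("minimum / maximum:", data.min(), data.max())
print("mean / median    :", data.mean(), np.median(data))

# Spread
q1, q3 = np.percentile(data, [25, 75])
print("IQR (middle 50%) :", q3 - q1)
print("variance         :", data.var())
print("std deviation    :", data.std())        # square root of the variance

# Shape
print("skew             :", stats.skew(data))  # >0 tail to the right, <0 tail to the left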

Page 18: 3D Modeling Primer

Questions for Review:

1. Skew is a measurement of Size and Location. T/F (F)
2. Variance is a measure of how different the data values are. T/F (T)

Page 19: 3D Modeling Primer

Page Title: Typical Display and Uses of Univariate Statistics

A typical display of univariate statistics in report format might look like this: [q 1]

Typical uses of univariate statistics:

1. It is not uncommon to use the univariate statistics of a data set as its “signature”. When voluminous data sets must be managed and manipulated, naming conventions are sometimes forgotten, but the data signature inherent in these measurements can be used to identify a particular data set unambiguously. [q 2]

2. In geostatistical modeling, monitoring the univariate statistics of a particular data set as it goes through one transformation or another lets you determine whether the transformation went according to plan.

Page 20: 3D Modeling Primer

Questions for review:

1. Univariate statistics are usually presented in what format? (graphic) 2. Taken in combination, univariate statistics can be a unique ____ of a data set. (signature)

Page 21: 3D Modeling Primer

Page Title: Crossplot What does a Crossplot show?

• Specifically, it displays the values of two variables measured at the same location [q 1]

o vertical axis is first variable, horizontal axis is second.

• Reveals the degree of correlation between the two variables by the shape of the data cloud. [q 2]

• Look for the cloud of points to form a shape, or even a line to indicate significant levels of correlation. [q 3]
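A minimal Python sketch of the same idea, using synthetic (made-up) porosity and reflection-strength values: the correlation coefficient quantifies the relationship, and the scatter of points is the crossplot itself.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical pairs measured at the same locations
porosity   = np.random.uniform(0.05, 0.30, 50)
reflection = 2.0 * porosity + np.random.normal(0.0, 0.02, 50)  # correlated attribute

# Degree of correlation, reported as the correlation coefficient
r = np.corrcoef(porosity, reflection)[0, 1]
print("correlation coefficient:", round(r, 3))   # near +1: strong positive relationship

# The crossplot itself: first variable on the vertical axis, second on the horizontal
plt.scatter(reflection, porosity, s=10)
plt.xlabel("reflection strength (second variable)")
plt.ylabel("porosity (first variable)")
plt.title(f"Crossplot, r = {r:.2f}")
plt.show()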

Questions for Review:

1. How many variables are depicted in a simple 2D crossplot? (2)

2. Specifically, what does the crossplot reveal? (Degree of correlation)

3. What do you look for in the crossplot which indicates a strong relationship is present between two variables? (Points form a shape)

[Figure: three example crossplots – very strong positive correlation (when one variable increases, the other increases), strong negative correlation (when one variable increases, the other decreases), and no correlation]

Page 22: 3D Modeling Primer

Page Title: Crossplot Examples

In general, we can use crossplots to see if there is a significant relationship between two or three variables, especially when the primary variable is under-sampled in some of the areas where you want to map it. For example, assume the following: Saturation logs exist only in the North of the reservoir we wish to map. Through study of the seismic attributes, we have an idea that the seismic attribute reflection strength is related to saturation in a particular zone. If, indeed, we can show that saturation and reflection strength are strongly related, then in those areas of the reservoir where we do not have any data for saturation, we use reflection strength as a surrogate value, completing the map in all locations. Gather the data A requirement for making a crossplot is that we have a reasonably large set of pairs of values where both target variables have been measured at the same geographic location. For example, a grid of saturation values and another grid of reflection strength could be used, as long as the two grid geometries are identical. Alternatively a well log of saturation values and another synthetic log of reflection strength from the seismic volume would work as well. In making the crossplot, one of the variables is called the primary , and the other is called the secondary. Typically, the secondary will be the surrogate attribute. Make the crossplot When the two attributes are plotted on the crossplot, in Petrel, for example, look for a significant relationship, as defined by a focusing of points along some narrow shape, either curved or straight, preferably thin and narrow. We go through this exercise with the data making the following assumption:

The primary data value at one location can be predicted (calculated) from a single value of the secondary variable at the same location.

Formulate the relationship It is the job of the crossplot to show IF a relationship exists, and how strong it is. If a relationship is found, then the next step is to formulate the mathematics of the correlation. Essentially, we must know how to calculate a reasonable value of the primary variable at some location from a value of the secondary variable at the same location – what is the formula? Software typically takes care of this, for example, Petrel or LPM . [q 1]
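Petrel or LPM formulate this relationship for you; purely as an illustration of what such a formula looks like, the sketch below fits a straight line to hypothetical co-located saturation and reflection-strength values and then predicts the primary from the secondary.

import numpy as np

# Hypothetical co-located pairs: primary (saturation) and secondary (reflection strength)
reflection = np.random.uniform(0.0, 1.0, 200)
saturation = 0.3 + 0.5 * reflection + np.random.normal(0.0, 0.05, 200)

# Formulate the relationship: a least-squares line, saturation = a * reflection + b
a, b = np.polyfit(reflection, saturation, deg=1)
print(f"saturation ~ {a:.3f} * reflection + {b:.3f}")

# Use the relationship where only the secondary variable has been measured
reflection_only = np.array([0.10, 0.55, 0.90])
predicted_saturation = a * reflection_only + b
print("predicted saturation:", np.round(predicted_saturation, 3))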

Page 23: 3D Modeling Primer

Use the relationship in modeling – The final step is to make use of both the primary and the secondary data sets and the relationship between them in a modeling operation which will produce the desired result. The desired result is that the primary attribute will be defined over its original area as well as the area covered by the secondary variable. When Petrel is used for the modeling operation, many of the steps involved are automated. Note that you must choose an algorithm which actually allows the input of a second data set. [q 2]

Questions for review:

1. When a relationship is formulated, what kind of function is created? (mathematical)

2. Modeling with correlated data sets requires you to choose an algorithm which allows __________. (a secondary data set)

Page 24: 3D Modeling Primer

Page Title: Variogram Cloud

• This is actually a first step in the creation of a variogram. [q 1]
• Finds all point pairs for each distance (lag) classification.
• Computes the variance for each point pair.
• Shows the variance (how different the points are) for each pair within each classification to help explain the final variogram shape.
• The final variogram is simply the average of the variance for each distance class. [q 2]

The distance class ranges are not shown here.

Questions for Review:

1. What step is the variogram cloud in the sequence of variogram creation? [first step]

2. The final variogram shows the ___________________ of the variances in each of the distance classes. [average]

[Figure: variance cloud – variance (Y axis) plotted against separation distance (X axis)]

Page 25: 3D Modeling Primer

Page Title: Variogram Cloud Usage Example

The variance cloud as depicted above cannot be generated in Schlumberger software at the moment; however, Petrel offers a variation of this which is very useful. In the diagram below from Petrel, we see that a histogram of the variance cloud has been superimposed over the final experimental variogram. [q 1]

The advantage of this display is that it gives a relative "weight" to each experimental variogram point by showing you how many point pairs were used in its calculation. In those cases where only one or two pairs contributed to a final variogram value, that value might be ignored when fitting the model to the variogram. [q 2] The histogram is useful, but being able to see the results of the specific pairs of points is even more useful, especially if there is any facility for identifying actual well names, which might even lead to the correction of errant logs.

Questions for Review:

1. Petrel cannot display a variogram cloud, but can display a ____________ based on the variogram cloud values. [histogram]

2. Does the information displayed by Petrel allow the user to see the number of point pairs used to compute the final values, or the actual variance value for each pair? [number]

Page 26: 3D Modeling Primer

Page Title: Variograms

• The input to a variogram is a set of data points from which a model is to be computed – for example, a set of porosity well logs. Variograms should be made from the original or thresholded logs, not the averaged or upscaled logs.
• A variogram is a plot of variance (Y) versus distance class (X).
• Variance measures how different a set of points are, one from the other.
• Based on how far apart two points are, a variogram will show how different they can be expected to be. [q 3]
• A basic assumption in geostatistics is that the closer two points are, the more similar their values will be.
• The variogram measures unique characteristics about a data set's variance, such as its Range, Sill, and Nugget, which are defined below.

How is a variogram computed?

• First, find all point pairs in all distance classes (lags). For example, find all those pairs of points which are 10 meters apart, put them in a bin, then find all pairs of points which are 50 meters apart, put them in another bin, etc.
• For each bin, compute the variance for each pair of points in the bin (class).
• Now, average all the variance values into one number for each bin.
• Plot the average variance for each class as depicted by the black points below. These points make up what is called the "experimental variogram".
• Let the user fit a curve through the experimental variogram to create the variogram shape, as depicted by the simple curve in the diagram below (a small computational sketch follows the diagram).

[Figure: an experimental variogram (points) with a fitted model curve, annotated with the Range, the Sill, and the Nugget (the y-intercept, if any)] [q 1] [q 2]
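As a small computational sketch of the steps above (Python, with hypothetical porosity samples along a single well), the experimental variogram is just the per-lag average of the pair semivariances, here taken as half the squared difference of each pair.

import numpy as np

def experimental_variogram(x, z, lag_width, n_lags):
    """Experimental variogram sketch for 1D sample positions x with values z.
    Each pair contributes a semivariance 0.5*(z_i - z_j)**2; pairs are binned
    by separation distance (lag) and each bin is averaged."""
    gamma = np.zeros(n_lags)
    counts = np.zeros(n_lags, dtype=int)
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            h = abs(x[i] - x[j])                  # separation distance of the pair
            k = int(h // lag_width)               # which distance class (bin) it falls in
            if k < n_lags:
                gamma[k] += 0.5 * (z[i] - z[j]) ** 2
                counts[k] += 1
    valid = counts > 0
    lags = (np.arange(n_lags) + 0.5) * lag_width  # bin centers
    return lags[valid], gamma[valid] / counts[valid], counts[valid]

# Hypothetical porosity samples along a well
x = np.linspace(0.0, 100.0, 60)
z = 0.2 + 0.05 * np.sin(x / 15.0) + np.random.normal(0.0, 0.005, x.size)

lags, gamma, npairs = experimental_variogram(x, z, lag_width=5.0, n_lags=10)
for h, g, n in zip(lags, gamma, npairs):
    print(f"lag {h:5.1f} m : semivariance {g:.5f}  ({n} pairs)")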

Page 27: 3D Modeling Primer

The range, sill, and nugget are the three most revealing characteristics of the variogram. The Range shows the distance where spatial relationships between data points cease. In other words, two data points which are further apart than the Range have only a random relationship. The Sill is the variance at the Range. A non-zero Nugget indicates that there are very close data points which are not very similar.

Questions for Review:

1. The values on the X-axis of a variogram represent [distance class or separation distance]
2. The values on the Y-axis of a variogram represent [variance or semivariance]
3. Variance measures how _____________ points are. [different]

Page 28: 3D Modeling Primer

Page Title: Why do we need variograms?

Reason 1 – Variograms are required for many algorithms. [q 1] If you choose a Kriging algorithm, or certain other geostatistical algorithms for modeling, you must have a variogram. In most cases, a default variogram will be available, making it unnecessary to actually create and shape the variogram, but still, a variogram is required for many of the algorithms because the variogram becomes the primary weight function during modeling.

Reason 2 – To determine the natural heterogeneity or inherent granularity of the data in the vertical direction. [q 1] It turns out that the Range of the vertical variogram is a good candidate for the layering increment within a particular zone. We want a layer thickness which will allow differences in facies to be seen.

Reason 3 – To determine if there is anisotropy in the horizontal direction. [q 1] Tools are available during the creation of the horizontal variogram which allow you to establish whether anisotropy exists and then measure it.

Reason 4 – To have another quality control measurement for comparison before and after modeling operations. [q 1] As you analyze, edit, threshold, and model your data, the variogram, just like the histogram, is an excellent tool for making sure that the characteristics of the data are preserved after each modeling operation. For example, the variogram of the upscaled well logs and the variogram of the 3D model should be similar if the proper algorithm was used.

[Figure: layering is the vertical cell size in the 3D model, between the Top and Base of a zone]

Page 29: 3D Modeling Primer

Questions for Review:

1. Name two reasons to make a variogram. [it is required by many algorithms; to determine vertical heterogeneity; to determine horizontal anisotropy; to verify quality control before and after modeling]

Page 30: 3D Modeling Primer

Page Title: Variogram Maps

• Uses the same information as computed in a variogram.
• Reorganizes the point pairs in a geographic 2D sense by E/W and N/S separation distance.
• Produces a contour of the 2D variance surface for unambiguous detection of the direction and extent of anisotropy. If anisotropy exists, then you will see oval-shaped contours whose high or low will be in the center of the map. Think of the oval contours as sets of small to large-sized ellipses whose major axis shows the major direction of anisotropy.
• The map is symmetrical and reversed on either side of the major axis of anisotropy, as in the example below: [q 1]

Questions for review: 1. What strong geometric characteristic does a variogram map have? [Symmetry]

Page 31: 3D Modeling Primer

Page Title: Using the Variogram Map

The variogram map provides a highly automated way to determine whether or not a data set has anisotropy. [q 1] It performs several single-variogram functions at one time. The anisotropic direction, if present, can be seen quickly and clearly measured. If available, it should be the first thing done when investigating horizontal variograms, since so much information can be revealed at once. [q 2]

Review Questions

1. Is the variogram map a more automated or less automated way to discover anisotropy? [more]

2. If available, a variogram map should be the Initial or Final operation in making the horizontal variogram? [Initial]

Page 33: 3D Modeling Primer

Subject: Part 2- Modeling Concepts Section: Modeling Geometry

Chapter: 2D Grids Page Title: What are the components of a 2D Faulted Model?

The components of a 2D Faulted model are:

1. Grid [q 1]
2. Fault Traces [q 1]
3. Interpolator

The interpolator is a grid-based algorithm which can compute the value of the grid at any arbitrary location. It is this software interpolator which allows you to think of the grid as continuous and existing between the grid nodes, even in the extrapolation zone.

• 2D grids can be refined.
• Z-values are located at the nodes (intersections of row/column lines). [q 2]
• Grid geometry is everywhere perpendicular. [q 4]
• Number of grid nodes across any fault face depends on the width of the fault zone. [q 3]

[Figure: a faulted 2D grid showing the upthrown and downthrown fault traces, the fault face, the extrapolation zone, single-valued grid nodes, and the downthrown horizon block]

Page 34: 3D Modeling Primer

Review Questions

1. What are the 2 real components of a 2D model? (Grid and Faults)
2. Where are the grid values located in a 2D model? (Grid nodes)
3. The number of grid nodes across any fault face in the 2D grid is determined by? (Width of fault zone)
4. The geometry of 2D grids is everywhere _____________? (Perpendicular)

Page 35: 3D Modeling Primer

Page Title: Grid Nomenclature

Once the grid node values are computed, the data points are redundant. The 2D model consists of the grid and the faults. The 2D grid is a collection of X, Y, and Z (elevation) points, but only one Z can be stored for one X, Y location. [q 2] Row/column numbering is shown with respect to CPS-3 conventions.

[Figure: 2D grid nomenclature – the grid spans Xmin to Xmax and Ymin to Ymax; columns are numbered left to right and rows top to bottom (CPS-3 convention)] [q 3,4]

• One grid cell [q 5] – small cells mean high resolution, large cells mean low resolution.
• One grid node – nodes are at the intersections of the row/column lines; values are computed for each node, if possible, from surrounding data points.
• Fault trace [q 1] – used to segregate data points.
• Data points.

Page 36: 3D Modeling Primer

Questions for review:

1. Fault traces are used to ___________________ data points. (segregate)
2. How many Z-values can be stored for one X, Y location in a 2D grid? (only 1)
3. Columns are numbered from ______ to ______ in CPS-3 convention. (left, right)
4. Rows are numbered from ______ to ______ in CPS-3 convention. (top, bottom)
5. Smaller grid cells cause a/an ____________ in model resolution. (increase)

Page 37: 3D Modeling Primer

Page Title: Grid Refinement

Grid refinement is typically a 2D operation; 3D modeling systems tend towards cellular models, where only one value per cell is allowed and cells cannot be refined. It is grid refinement which allows a 2D grid-based model to be thought of as existing everywhere, even though grid values exist only at row/column intersections. As evidenced by contours, which are traced across all locations of the model, a 2D grid is always thought of as a continuous surface, rather than a collection of cellular blocks. Refinement typically occurs in discrete multiples, i.e., refinement by two, by three, etc. Below is a depiction of an original 5-column by 6-row grid which was refined "by three", i.e., each cell was subdivided by three in each direction. The new grid is composed of both the dark lines and the dotted lines and has 13 columns and 16 rows. The extra cells are computed only from the original cells; the original data is not used. The extra cells give the model a smoother and, up to a point, more accurate quality. [q 1, 2, 3]

[Figure: an original 5-column by 6-row grid (dark lines) refined by three; the dotted lines show the added rows and columns]
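The sketch below mimics "refine by three" with simple linear interpolation in Python; the elevation values are random placeholders, and the interpolation scheme is only an assumption about how a given package refines its grids.

import numpy as np

def refine_grid(grid, factor):
    """Sketch of 'refine by N': linearly interpolate new nodes between the original
    grid nodes. Only the original grid values are used -- not the original data."""
    nrows, ncols = grid.shape
    new_rows = (nrows - 1) * factor + 1
    new_cols = (ncols - 1) * factor + 1

    # Interpolate along columns first, then along rows (bilinear overall)
    old_c = np.arange(ncols)
    new_c = np.linspace(0, ncols - 1, new_cols)
    step1 = np.array([np.interp(new_c, old_c, row) for row in grid])

    old_r = np.arange(nrows)
    new_r = np.linspace(0, nrows - 1, new_rows)
    step2 = np.array([np.interp(new_r, old_r, col) for col in step1.T]).T
    return step2

# Original 6-row by 5-column grid of elevations (hypothetical values)
original = np.random.uniform(1000.0, 1100.0, size=(6, 5))
refined = refine_grid(original, factor=3)
print(original.shape, "->", refined.shape)   # (6, 5) -> (16, 13), as in the text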

Page 38: 3D Modeling Primer

Questions for Review

1. Refinement allows the 2D model to be thought of as existing ___________. (everywhere)

2. A refined grid is _______________ than its original version. (smoother)
3. 3D grids typically cannot be refined. T/F (T)

Page 39: 3D Modeling Primer

Page Title: 2D Grid Examples In this view of a 2D grid displayed in a 3D volume, the continuity and refinement are still apparent. Likewise, in this traditional manifestation of the 2D grid through contouring, we see that the lateral variation of 2D grids can be broken into many levels by the use of refinement, which, in many 3D systems is not available.

Page 40: 3D Modeling Primer

Below is another figure which shows how the light blue lattice of 2D grid cells is typically refined for purposes of contouring, in contrast to the geo-cellular 3D grid below it, whose cells are typically not refined during display. They are not refined because the concept is that only a single geo-cellular value exists at the center of each cell, whereas the values in the 2D grid are assumed to exist at the corners.

Page 41: 3D Modeling Primer

Subject: Part 2- Modeling Concepts

Section: Modeling Geometry Chapter: 3D Grids – Property Models

Page Title: Components of a 3D Model What are the components of a 3D geo-cellular Model?

1. Grid Cells and Values
2. Fault Blocks (Fault "Segments" in Petrel)
3. Units ("Zones" in Petrel)
4. Layers

Typically, the cells cannot be refined, so the grid-based interpolator used by the 2D grid geometries is not needed. Also, the fault blocks are pre-defined in 3D models, so fault traces and grid-based interpolation are not needed for those reasons either. [q 5]

• Grids are geometrically partitioned by Fault Block, as well as by Zones (Units). [q 2]
• Zones are subdivided by Layers. [q 3]
• Grid has X, Y, Z, A components, where A is the attribute being modeled.
• Originating 2D structure grids provide the initial geometry. [q 1]
• Cells are still basically rectilinear in X, Y, although truncated by the faults.
• Sides of cells are typically orthogonal except at faults.
• 3D grids do not allow refinement.
• 3D grids do not require fault traces.
• A single value is positioned at the center of each cell. [q 4]

[Figure: a 3D geo-cellular grid – a unit or zone subdivided into layers and partitioned into fault blocks (Block 1, Block 2), with the cell value at the center of each cell]

Page 42: 3D Modeling Primer

Review Questions

1. The initial geometry for 3D grids is typically provided by __________. (2D structure grid)
2. 3D grids are partitioned by __________ and _________. (fault block and zone)
3. 3D grid zones are subdivided into _______. (layers)
4. Values for 3D grids are typically located at __________________. (the center of the cells)
5. Why are fault traces not needed for 3D models? (fault blocks are predefined)

Page 43: 3D Modeling Primer

Page Title: Example of 3D model An example of a 3D facies grid showing the inherent granularity of 3D grids compared to 2D grids.

Page 44: 3D Modeling Primer

Subject: Part 2- Modeling Concepts Section: Modeling Geometry

Chapter: 3D Grids – Reservoir Models Page Title: Reservoir Models

Components are basically the same as for 3D Property grids. The biggest difference is the orientation and shape of the cells, whose purpose now is not only to honor the structure, but also the flow within it. [q 1,2]

• Faces of cells oriented along faults, perpendicular to FLOW [q 3]
• Faces of cells also oriented radially around wells, when needed
• 2D structure grids provide basic geometry
• Model is organized by fault blocks
• Attribute value in center of cell
• Zones or Units are subdivided into layers
• Sides of cells may or may not be vertical
• No fault traces required
• No refinement

Today, a 200,000-cell grid is an average grid for simulation. A 500,000-cell grid is considered large. [q 4] A 200K-cell grid is roughly a cube 60 cells on a side.

[Figure: a simulation grid with layers and cell faces oriented relative to the 3D flow vector]

Page 45: 3D Modeling Primer

Review Questions

1. The purpose of the simulation grid is to model ________. (flow)
2. One major difference between simulation grids and property grids is the ______________________ and ____________________ of cells. (orientation and shape)
3. Faces of cells in a simulation grid are ideally oriented _____________ to the fluid flow. (perpendicular or normal)
4. Today, an average sized simulation grid has about _____________ cells. (200,000)

Page 46: 3D Modeling Primer

Page Title: 3D Simulation Grids There are two basic types of simulation grids, structured and unstructured. [q 1] Structured grids are the most commonly used; unstructured grids are used only occasionally.

Structured grid geometry (FloGrid):

• Cell edges are vertical, but faults can be sloped
• Grid is distorted in X, Y to honor faults
• Tops and bases follow geological model
• Cells are typically 6-sided
• Viewed in plan view, grid is relatively homogeneous with respect to cell size
• Structured grids have a fixed number of rows, columns, and layers [q 2]

Questions for Review:

1. What are the two basic types of simulator grids? (structured and unstructured) 2. Structured grids have a _________number of rows, columns, and layers. (fixed)

Page 47: 3D Modeling Primer

Page Title: Structured Grids

Two types of Structured Grids:

1. Block Center
• Flow connections are center to center [q 1]
• Cells are always rectilinear and orthogonal
• No information about the geometry of neighbor cells
• FloGrid does not use this

2. Corner Point
• Flow connections are face to face [q 2]
• Geometry of neighbor cells is known

Questions for Review:

1. In Block Center grids, flow is assumed to be from ________ to ________. (center to center)
2. In Corner Point grids, flow is assumed to be from ________ to _______. (face to face)

Page 48: 3D Modeling Primer

Page Title: Unstructured Grid Geometry (FloGrid) Unstructured grids can model flow more accurately, especially relative to a particular object or feature, but are time-consuming to define and use. Unstructured Grid Geometry (FloGrid):

• Cell edges are generally vertical, but faults can be sloped
• In plan view, cells may not be homogeneous with respect to shape (dense in some places, sparse in others) [q 1]
• Honors flow orthogonality not only at faults, but at wells
• Use is not as common as structured grids

Review Questions

1. When viewed in plan view, unstructured grid cells may not be ________________ in shape. (homogeneous, consistent)

Page 49: 3D Modeling Primer

Subject: Part 2- Modeling Concepts Section: Data Transformations During Modeling Operations

Chapter: Data Transformations

Page Title: Thresholding

Transform Name – THRESHOLDING

Description: Remove/ignore borehole values outside specified Min/Max cutoffs. [q 1]

Workflow:

• Load boreholes
• Use previously decided cutoffs or verify cutoffs with histogram
• Use thresholding tool to reset the Min and Max

Question for review:

1. Thresholding will _________ values along a borehole outside some specified limit. (remove or ignore)

Page 50: 3D Modeling Primer

Page Title: Upscaling

Transform Name – UPSCALING or AVERAGING by CELL

Description: Average the borehole values falling in the same cell. [q 1]

Workflow:

• Define number of layers in unit (correlation scheme)
• Load boreholes
• Pick best averaging technique for the original or thresholded data
• Upscale the data, specifying the averaging method
• Each cell contains one averaged value, while there may be hundreds of values along the borehole

Red circle represents upscaled data, one point per cell (horizontal subdivision of a layer)
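A minimal sketch of averaging by cell (Python, with hypothetical porosity samples and cell indices); the choice of averaging method is up to the modeler, and the geometric-mean line is only an example of an alternative.

import numpy as np

# Hypothetical borehole samples: porosity values and the index of the 3D-grid cell
# (layer) that each sample falls in once the well is positioned in the grid
porosity   = np.array([0.21, 0.23, 0.19, 0.30, 0.28, 0.05, 0.07, 0.06])
cell_index = np.array([0,    0,    0,    1,    1,    2,    2,    2])

# Upscaling / averaging by cell: one value per cell from the (possibly many) samples in it
for cell in np.unique(cell_index):
    values = porosity[cell_index == cell]
    print(f"cell {cell}: {values.size} samples -> arithmetic mean {values.mean():.3f}")

# Other averaging methods can be chosen depending on the property; a geometric mean,
# for example, is sometimes preferred for permeability:
perm = np.array([120.0, 300.0, 45.0])
print("geometric mean permeability:", np.exp(np.log(perm).mean()))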

Page 51: 3D Modeling Primer

[Figure: upscaled well logs from Petrel]

Question for review:

1. Upscaling averages borehole values which fall in the same_______. (cell)

Page 52: 3D Modeling Primer

Page Title: Segregation

Transform Name – SEGREGATION

Description: For model quality, data is segregated by fault block so that grid values computed in one block will use data only from that block. [q 1] Another example of segregation is during Sequential Gaussian Simulation in Petrel, where input data such as porosity logs are optionally segregated by lithology.

Workflow:

• This is an option selected during Population (gridding)

Page 53: 3D Modeling Primer

Questions for Review:

1. Data is segregated by fault block to control the ___________ of the model. (quality)

2. Porosity data can be segregated by lithology because porosity may behave differently in different facies T/F. (T)

Page 54: 3D Modeling Primer

Page Title: Masking

Transform Name – MASKING

Description: Discrete grid values from one grid can define a location template to determine where to populate an output grid. For example, during Sequential Gaussian Simulation in Petrel, an existing facies grid can be used as a look-up table, so to speak, during petrophysical modeling to tell the algorithm when it is gridding at a "channel" location, or a "fine sandstone" location, etc.

Workflow:

• Create a masking grid with discrete values at those locations where you want an attribute computed, for example, at particular facies locations. [q 1]
• Pick the masking option when specifying the output grid during population, or when performing the grid-to-grid operation.

Questions for Review:

1. Masking can be used to control the __________ where certain operations are performed. (location)

Page 55: 3D Modeling Primer

Page Title: Removing a Trend

Transform Name – REMOVING A TREND

Description: As you will learn, Kriging algorithms make certain assumptions about their input data sets. In particular, the assumption of "stationarity" (another expensive word which, for our purposes, can just mean "behaving everywhere in the same manner") [q 1] insists that there be no inherent trend exhibited by a data set if it is to be Kriged properly. Thus, one of the most common data transforms to perform in 3D modeling is to remove the trend from data sets before making the final variograms and modeling with a Kriging algorithm. The workflow is shown below.

Workflow:

• Before Kriging any data set, determine if it has a "trend" by inspecting its histogram or its variogram (for horizontal convergence to a sill), or by some other means. [q 2]
• If a trend is identified, then remove the trend using the appropriate tool within the software, krig the data, and add the trend back into the result. Note that this operation is not to be confused with attempting to make use of an external trend during modeling.

Trend removal is also discussed in more detail in the section which describes Kriging algorithms. A rough sketch of the detrend/krig/restore idea follows.
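This sketch fits and removes a simple planar trend with numpy; the kriged residual is only a placeholder value, since the kriging step itself is outside the scope of this example.

import numpy as np

# Hypothetical samples with a regional (planar) trend plus local variation
x = np.random.uniform(0, 1000, 100)
y = np.random.uniform(0, 1000, 100)
z = 0.002 * x - 0.001 * y + 5.0 + np.random.normal(0.0, 0.3, 100)

# Fit a planar trend z ~ a*x + b*y + c by least squares
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
trend = a * x + b * y + c

# Remove the trend; the residuals are what gets variogrammed and kriged
residuals = z - trend
print("residual mean ~ 0:", round(residuals.mean(), 4))

# ... variogram the residuals, krig them to the grid nodes ...
# then add the trend back in at each grid node (xg, yg):
xg, yg, kriged_residual = 500.0, 500.0, 0.1          # placeholder kriged value
final_value = kriged_residual + (a * xg + b * yg + c)
print("final value with trend restored:", round(final_value, 3))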

[Figure: trend-removal workflow – determine if a trend exists in the data, remove the trend, make the final variogram, krig the data, add the trend back into the result]

Page 56: 3D Modeling Primer

Do not confuse the removal of a trend from a data set with various Kriging algorithms such as "Kriging with an External Trend". They are separate concepts. [q 3]

Questions for Review:

1. Removing a trend from a data set ensures that ______________ is maintained (stationarity).

2. You can use either a _____________ or a ______________ to determine if a data set has an inherent trend. (histogram, variogram)

3. Kriging with an External Trend is the best way to remove a trend from a data set. (T/F) (F)

Page 57: 3D Modeling Primer

Page Title: Normal Score Transform

Transform Name – NORMAL SCORE TRANSFORM

Description: Some Kriging algorithms require your data to be in a normal distribution before the algorithm will work properly. An example is Sequential Gaussian Simulation. Petrel provides a simple way of doing this for you and will automatically perform the inverse transform on the output. A data set which has been transformed to a normal distribution will have a mean of 0.0 and a standard deviation of 1.0. It is easy to identify data such as this with a simple histogram. The first histogram below shows data in its raw state, the second shows the same data after being transformed to normal score format. [q 1, q 2, q 3]
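Petrel performs the transform and its inverse for you; as an illustration of the underlying idea, this sketch rank-transforms a skewed, hypothetical permeability data set to normal scores using scipy.

import numpy as np
from scipy.stats import norm, rankdata

def normal_score_transform(values):
    """Rank-based transform of any distribution to a standard normal one
    (mean ~0, std ~1). The mapping is monotonic, so it can be inverted by
    keeping the sorted original values."""
    n = len(values)
    ranks = rankdata(values)                    # 1..n, average ranks for ties
    quantiles = (ranks - 0.5) / n               # empirical cumulative probabilities
    return norm.ppf(quantiles)                  # corresponding standard-normal values

# Hypothetical, strongly skewed permeability data (far from Gaussian)
perm = np.random.lognormal(mean=3.0, sigma=1.0, size=500)
scores = normal_score_transform(perm)

print("raw mean/std   :", round(perm.mean(), 1), round(perm.std(), 1))
print("score mean/std :", round(scores.mean(), 3), round(scores.std(), 3))  # ~0.0 and ~1.0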

Questions for Review:

1. The mean of data which is in Normal Score format is equal to ____. (0.0)
2. An example of a kriging algorithm which requires the data to be in a normal distribution is __________________. (SGS)

Page 58: 3D Modeling Primer

Page Title: Weighting

Transform Name – WEIGHTING

In modeling, individual data points are typically weighted (given more or less importance) during many calculations.

Description: Input data collected during gridding or population operations is typically weighted by distance before it is used to calculate a value for the grid node. The weighting scheme ensures that data points which are closer to the grid node being computed have more weight than far points.

[Figure: an everyday analogy – weight as a function of height above the earth: 200 lbs at the surface, 199.9 lbs a little higher, 0 lbs far away]

Page 59: 3D Modeling Primer

Close points are assumed to be more representative of the attribute being mapped than points which are further away. The attribute values of the points themselves are not changed; they are simply given more or less importance, based on their distance from the node. [q 1, q 2, q 3] As you can tell by the shape of the weight curve, most of the weight is given to very close points, and little weight is given to further points.
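The sketch below illustrates distance weighting with a simple inverse-distance-squared scheme and made-up points; kriging uses the same idea but derives its weights from the variogram rather than from a fixed formula.

import numpy as np

# Hypothetical data points around the grid node being computed
points = np.array([[10.0,  5.0], [80.0, 60.0], [15.0, 20.0], [200.0, 150.0]])
values = np.array([ 0.22,         0.15,          0.20,          0.05])
node   = np.array([12.0, 10.0])

# Weight as a function of distance: close points get high weights, far points low weights
dist = np.linalg.norm(points - node, axis=1)
weights = 1.0 / dist**2
weights /= weights.sum()                       # normalize so the weights sum to 1

for d, w in zip(dist, weights):
    print(f"distance {d:7.1f} -> weight {w:.3f}")

# The data values themselves are unchanged; the weights only control their importance
node_value = np.sum(weights * values)
print("estimated value at the node:", round(node_value, 3))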

[Figure: the weight curve – close points receive high weights and far points receive low weights; the distance from each data point to the grid node being computed determines its weight]

Page 60: 3D Modeling Primer

Questions for Review:

1. Points used in the calculation of grid values are typically weighted as a function of ____________ (distance).

2. Far points have _______weights, near points are assigned ________weights. (low, high).

3. When weighting data points, the algorithms raise or lower the original value, based on the weight assigned. T/F (F).

Page 61: 3D Modeling Primer
Page 62: 3D Modeling Primer

Subject: Part 3 – Modeling Methodologies Section: Gridding Methods

Chapter: Algorithm Classifications Page Title: Introduction

In this section, we’ll talk about the geostatistical algorithms which compute 3D facies and property grids. Gridding is sometimes referred to as “population”. “Gridding”, or “population” is the process by which randomly-spaced data samples of some property are transformed into a complete model in the form of an organized 3d grid lattice.

[Figure: input data (well logs, 2D grids, 3D grids, constants) is fed to an algorithm, which produces the model]
Page 63: 3D Modeling Primer

Page Title: How Are Gridding Algorithms Classified?

• Geostatistical algorithms vs. Traditional algorithms

Most of the algorithms we'll be discussing here fall in the category of geostatistical algorithms, including Kriging and Simulation. These algorithms include standard statistical techniques in their operation. There are also non-geostatistical algorithms available such as Nearest Neighbor, Distance To Nearest Neighbor, Inverse Distance, and Inverse-Distance Cogridding.

• Kriging algorithms vs. Non-kriging algorithms

Kriging is one type of geostatistical algorithm which has many variants. The traditional algorithms listed above are typically non-kriging algorithms. We will see what this means later. [q 1]

• Continuous vs. discrete properties

Some algorithms are designed specifically for discrete, tabular data values, such as rock type, facies, or lithology codes. Other algorithms are designed for continuous data such as porosity. [q 2]

• Estimation (Deterministic) algorithms vs. Simulation (Probabilistic) algorithms

The mechanics of Estimation algorithms are totally different from those of Simulation algorithms. Deterministic algorithms try to create a model which follows the data literally, while Probabilistic algorithms create a model which is faithful to the statistical characteristics of your data. One of the biggest decisions to make is to choose which one to use for a particular property or facies. [q 3]

• Single or multiple data set

Many algorithms will allow the specification of a secondary, correlated data set. This is particularly useful when the primary data set is sparse and correlated data exists which covers a larger area. An example of this would be the situation where you have very few wells for your primary data, but you have one or more seismic attributes which can be correlated with the property you wish to model. [q 4]

Review Questions:

1. Traditional algorithms are typically non-kriging algorithms. T/F (T)
2. Facies logs would be considered to be __________ data. (discrete)
3. Probabilistic algorithms are synonymous with _______ algorithms. (simulation)
4. Some algorithms allow you to use more than one _________. (data set)

Page 64: 3D Modeling Primer

Page Title: More About Kriging and Non-Kriging Algorithms

What are the Differences between Kriging and Non-Kriging Algorithms? The comparison below applies only to the modeling of petrophysical properties, not structure.

• Kriging algorithms use variograms to guide the weighting of the data points; non-kriging algorithms do not.

• Kriging algorithms allow valid statements to be made relative to the probability of the results of certain calculations; non-kriging algorithms typically do not. [q 3]

• Kriging algorithms produce grids whose variance is minimized; typically, non-kriging algorithms do not. As a result of this characteristic, kriging algorithms tend to produce grids whose values remain within the range of the data, whereas non-kriging algorithms sometimes tend to project slopes inherent in the data. [q 1]

• Kriging algorithms tend to produce grids which preserve the percentages of data ranges inherent in the original data.

• Kriging algorithms will decluster two close data points, providing better weighting for both; non-kriging algorithms do not. [q 2]

• Kriging always accommodates anisotropic weighting; only a few non-kriging algorithms will do this.

• Kriging allows for a variety of alternative "extrapolation" algorithms to cover grid areas having only very sparse data; in general, non-kriging algorithms have only one alternative.

• Non-kriging algorithms tend to be simpler to use, not requiring the creation of a variogram.

• Kriged maps can appear noisy compared to maps made by non-Kriging algorithms. This is because many non-Kriging algorithms have built-in smoothing algorithms which are designed for more aesthetically pleasing results, and may not take established probabilities into consideration.

Page 65: 3D Modeling Primer

Questions for review:

1. Kriging algorithms minimize the _____________ of the error. (variance)
2. Kriging algorithms will __________ close data points. (decluster)
3. When using kriging algorithms, accurate statements can be made about the ___________ of certain results. (probability)
4. Non-Kriging algorithms do not use ________________. (variograms)
5. Kriging algorithms are typically easier to use than non-Kriging algorithms. T/F (F)

Page 66: 3D Modeling Primer

Page Title: More About discrete and continuous data

The most common properties to model with geostatistics are facies and petrophysical properties. Discrete data is data whose values are based on a classification scheme (0 = floodplain, 1 = levee, 2 = channel sand), and continuous data is data whose values represent real numbers, such as porosity, whose value can be in the range of zero to 1.0. Most geostatistical operations distinguish between these two types of data, providing separate algorithms for each type. For example, Sequential Indicator Simulation is a type of simulation algorithm which should be used with discrete data, whereas Sequential Gaussian Simulation is used for continuous data such as permeability or porosity.

Review Questions:

1. Discrete data values are based on a _________. (classification scheme)
2. Sequential Gaussian Simulation is used for _______ data sets. (continuous)

Page 67: 3D Modeling Primer

Page Title: More about deterministic and probabilistic algorithms

• Deterministic (creates a single grid) [q 1]

o Deterministic algorithms are also called "Estimation" algorithms
o Use this method when you have plenty of data
o Examples: Nearest neighbor, inverse distance, kriging

• Probabilistic (creates single or multiple grids) [q 1]

o Probabilistic algorithms are also called "Simulation" algorithms
o Use this method when you have sparse data, or very complex facies [q 3]
o Examples: Sequential Gaussian Simulation, Fluvial Simulation, Truncated Gaussian Simulation

• Deterministic

These algorithms take the data values literally and assume that computed grid values between data points have almost a geometric relationship with the points, based only on z-values, slopes between points, and closeness to the node. No attempt is made to preserve the distribution characteristics of the input data. This means that the histogram of the computed grid values may or may not resemble the histogram of the input data.

• Probabilistic

These algorithms take the data values literally as well, but what happens in between the points is a function of statistical measurements such as the frequency and distribution of z-values, both horizontally and vertically. In addition, the algorithm uses a random technique in the selection of data for each computed grid node. A fundamental characteristic of probabilistic ("stochastic") grids is that, for any given data set and its parameters, one output grid is as probable as the next. In fact, it is common to generate "multiple realizations" (versions) of a property and then study their differences and distributions to determine global probabilities as the basis of the final model. [q 2], [q 5]

Page 68: 3D Modeling Primer

Advantages / Disadvantages

• Deterministic
o Advantages: Many to choose from / Simple to use / Intuitive [q 4]
o Disadvantages: Tend to smooth out highs and lows / Inappropriate for flow simulation / No probability information / Tend to require plentiful data for good models

• Probabilistic
o Advantages: Works well even with sparse data / Gives a more realistic model in the case of sparse or difficult data / Retains highs and lows / Provides assessment of global probability / Retains distribution character of the original data
o Disadvantages: More difficult to use / Time consuming

Review Questions:

1. Deterministic algorithms typically create only _____ grid, while probabilistic algorithms create ____________ grids. (one, multiple)
2. Multiple realizations simply means multiple __________. (versions)
3. Probabilistic algorithms are used when you have _________ data. (little)
4. One advantage of deterministic algorithms is that they are _________. (plentiful, simple, intuitive)
5. Probabilistic algorithms typically involve a ________ generator. (random number)

Page 69: 3D Modeling Primer

Subject: Part 3 – Modeling Methodologies Section: Gridding Methods

Chapter: Variogram Basics Page Title: Variogram Roles in Gridding & Geostatistics

At this point in this primer, we will introduce only the most basic facts about variograms. A separate section on variograms will be presented later. Above, we see a typical variogram, a measure of variance with respect to distance classes. We will not worry for the moment about how to create a variogram, only about what it is used for by the gridding algorithms. We will be learning a lot more about variograms, but for right now, all we need to know is:

1. the variogram is created directly from the data to be used to create a model
2. the most important measurements shown by the variogram are

• Range – the distance between data points beyond which they cease to have any statistical relationship
• Sill – the value of the variance associated with the Range, where the variogram curve begins to flatten out
• Nugget – which shows how well the data set honors the assumption that close points will be very similar in value
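One common curve fitted through the experimental points is the spherical model; the sketch below evaluates it for made-up nugget, sill, and range values to show how the curve flattens at the sill once the lag reaches the range.

import numpy as np

def spherical_model(h, nugget, sill, vrange):
    """One common variogram model shape: rises from the nugget at h = 0 and
    flattens at the sill once h reaches the range."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / vrange - 0.5 * (h / vrange) ** 3)
    return np.where(h >= vrange, sill, gamma)

lags = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
print(spherical_model(lags, nugget=0.002, sill=0.010, vrange=400.0))
# Beyond the range (400 m here) the value stays at the sill: points that far apart
# no longer have a spatial relationship.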

[Figure: the portion of the variogram curve from zero distance out to the Range becomes the weight function for gridding]

Page 70: 3D Modeling Primer

What part do variograms play in geostatistics? It is not possible to discuss geostatistics without variograms. They are the basis of the geostatistical algorithms and must be present for the algorithms to be useful. We can summarize the need for variograms as follows:

• They are required for geostatistical algorithms [q 1]
• They are very useful as a data analysis tool, for example to: [q 2]
  • Determine layer thickness (using the vertical variogram)
  • Determine the directions/degree of anisotropy (using horizontal variograms)
• They are used as quality control tools to judge the quality of your model [q 2]

How are variograms used during gridding? The mechanism of how to create variograms and how to use their analytical capabilities will be covered as a separate topic. Here, we simply want to understand how they are used by the algorithms themselves. Think of a variogram as a packet of information which helps assign weights to the data points used in the calculation of individual grid values. In particular, variograms provide the following information:

1. A weight function for all data to be used in the calculation of grid values. This weight function is defined in the three primary directions, X, Y, and Z. It can be thought of as an ellipsoid.

2. The range in each direction (X, Y, and Z) for which the variogram weight function is valid. These numbers represent the Major, Minor, and Vertical Ranges. If most of the data collected to compute a grid value is further away than these ranges, then the grid value is computed using an alternative method which does not rely on the variogram’s weight function. [q 3]

Following are a few more important facts about variograms which you should know before we can continue with the comparison of 3D modeling algorithms. You will learn the mechanics of making variograms in a later section.

Questions for Review:

1. We need variograms because they are _________ for geostatistical algorithms (required).

2. The variogram serves as both _________ and_________ tools during modeling. (data analysis and quality control)

3. The variogram has a range in each _____________ for which a weight function is required. (direction)

Page 71: 3D Modeling Primer

Page Title: Facts to Remember about Variograms

In order to continue our discussion of geostatistical algorithms before going into a major discussion of variogramming, it is only necessary to make sure that we remember the following simple facts about variograms.

Variogram Fact #1

The RANGE of the variogram is that distance where data points in your data set begin to LOSE AUTO-CORRELATION. Stated another way, points in your data set which are closer together than the RANGE have a spatial significance in the correlation of their values, but points which are further apart do not. [q 1]

Variogram Fact #2

That portion of the variogram curve from zero distance out to the RANGE, when inverted, becomes the weight function used internally by geostatistical algorithms when computing grid node values. Data which is further than the RANGE from the grid node being computed uses a different weighting scheme. [q 2]

[Figure: the portion of the variogram curve out to the Range, inverted, becomes the weight function for gridding]
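As a small numerical illustration of Fact #2, here is a Python sketch (a toy built on assumed numbers, not the formula of any specific package) in which a simple variogram rising from nugget to sill is “turned upside down” to give a weight shape that is highest at zero separation and zero at and beyond the range.

    import numpy as np

    def toy_variogram(h, range_=1000.0, sill=1.0, nugget=0.1):
        # A toy variogram: rises linearly from the nugget to the sill at the range.
        h = np.asarray(h, dtype=float)
        return np.where(h < range_, nugget + (sill - nugget) * h / range_, sill)

    def kriging_weight_shape(h, **kwargs):
        # Fact #2: the weight shape is the variogram "turned upside down" (sill - gamma).
        sill = kwargs.get("sill", 1.0)
        return sill - toy_variogram(h, **kwargs)

    h = np.array([0.0, 250.0, 500.0, 1000.0, 2000.0])
    print(kriging_weight_shape(h))   # highest at h = 0, zero at and beyond the range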

Page 72: 3D Modeling Primer

Variogram Fact #3

In 3D geostatistical modeling, three variograms are defined for each data set to be modeled – one in the vertical direction and two in the horizontal direction. If the data is isotropic (having no natural directional bias), then the two horizontal variograms are the same. Together, the ranges of these three variograms define a three-dimensional ellipsoidal weight function which is used by the geostatistical component of the chosen algorithm.

Variogram Fact #4

The variogram RANGE and the SEARCH RANGE which is specified for a particular gridding operation are two separate concepts. The variogram RANGE has already been described. The SEARCH RANGE specifies how far away from a grid node data will be collected for use in the gridding. The user sets this value. Data further than the SEARCH RANGE will not be used. As suggested above, data closer than the SEARCH RANGE, but further than the variogram RANGE, is handled differently. Clearly, the SEARCH RANGE should be larger than or equal to the variogram RANGE. [q 3]

[Figure: conceptual ellipsoid formed by the three variogram ranges – the Major Horizontal Range, Minor Horizontal Range, and Vertical Range]

Page 73: 3D Modeling Primer

Review Questions – Variogram Basics

1. The variogram range is that distance within your data set where the individual points begin to lose ___________________. (autocorrelation)

2. That portion of the variogram curve from zero out to the Range is inverted and then becomes the ____________________ during gridding. (weight function)

3. The _________ range determines how much data is collected to use in gridding a particular node, but the _____________ range determines when the weighting scheme changes. (search, variogram)

Page 74: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Methods

Chapter: Terminology and Diagrams Used During Description of Algorithms

Page Title: Disclaimer

Those of you who have attained a good understanding of geostatistics, and maybe even GSLIB, will recognize that some of the depictions of gridding mechanics appear oversimplified, especially from a programming or data management point of view. Our intent here is to present large building-block concepts that can be quickly understood and allow the student to become effective with the tools. We do not presume to trace the actual structure and organization of grid-building computer code, nor the fine details of internal data manipulations or programming techniques.

Page 75: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Methods

Chapter: Traditional Estimation Algorithms Page Title: Mechanics of 3D Traditional Estimation Algorithms

A simple view of the mechanics of 3D Traditional estimation algorithms is given below:

• Position at a location where a value requires computation (a grid node within some selected zone).
• Collect points in the search zone (defined by the user in various ways). [q 1]
  • In this example, the Search Distance, D, is a Horizontal Search Range. The search thickness, T, is measured vertically.
• Weight the collected points by distance from the grid node. [q 2]
• Compute the value of the grid node using the selected algorithm and parameters.
• Move to the next node and repeat the process.
• If a minimum number of points is not found inside the Search Zone, the node value becomes null.

[Figure: a grid node to be calculated inside its search zone, with surrounding data points; D is the horizontal search distance and T the vertical search thickness]

Page 76: 3D Modeling Primer

Questions for Review:
1. Points used in the calculation of the node are collected in the _______ zone. (search)
2. Collected points are weighted by their _________ from the node to be computed. (distance)

Page 77: 3D Modeling Primer

Page Title: A Survey of Some Traditional Estimation Algorithms

• Nearest Neighbor
  • Each grid node takes on the value of the closest collected point. [q 3]
  • Used for Lithology, Rock Type

• Distance to Nearest Neighbor
  • Each grid node takes the value of the distance to the closest collected point.
  • Not really a population algorithm, but used for calculations and analysis

• Inverse Distance
  • Each grid node takes the value of a distance-weighted average of all points collected within the search limit. A variable power parameter determines the weighting – at its minimum (zero), the algorithm gives a distance-independent simple average; at very high values it approaches the nearest neighbor. Anywhere in between, points closer to the node to be computed are weighted higher and further points are weighted lower. (A minimal sketch of this weighting appears after this list.)

• Inverse Distance Cogridding
  • If the primary data to be gridded is sparse, then the grid of a secondary, correlated data set can be used to help guide the gridding of the primary data. If the secondary data, for example, has been shown to be correlated to the primary data, then the system will automatically compute the correlation function and transform the secondary data to the domain of the primary.
  • When both primary and secondary data are available within the set of collected points during gridding, the secondary data is weighted lower than the primary. Other than simply providing more data with secondary weighting, this algorithm works the same as the inverse distance algorithm.
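Here is that sketch of inverse-distance weighting in Python; the function, its parameters, and the sample points are hypothetical illustrations, not taken from the primer's software.

    import numpy as np

    def idw_estimate(node_xy, data_xy, data_z, power=2.0, eps=1e-12):
        # Inverse-distance-weighted estimate at one grid node.
        # power = 0 -> simple average; a very large power -> nearest-neighbor behavior.
        d = np.linalg.norm(np.asarray(data_xy) - np.asarray(node_xy), axis=1)
        w = 1.0 / (d + eps) ** power          # closer points get higher weights
        return float(np.sum(w * data_z) / np.sum(w))

    data_xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    data_z = np.array([0.10, 0.20, 0.30])     # e.g. porosity values
    print(idw_estimate((10.0, 10.0), data_xy, data_z, power=2.0))

Setting power to zero reduces the estimate to a simple average of the collected points, while a very large power effectively returns the value of the nearest point.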

Review Questions – Non-Kriging Estimation Algorithms

1. In the Nearest Neighbor algorithm, each node takes the value of the _________ collected point. (closest)

Page 78: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Methods

Chapter: Kriging Algorithms Page Title: Kriging Workflow

In this section, we’ll talk specifically about Kriging algorithms, but before discussing the actual mechanics of the algorithm, we’ll note that in contrast to non-Kriging algorithms, more data preparation or transformation may be required. For this reason, we’ll first describe two common transformations that are typically performed on the data before these algorithms are used.

Requirement to remove trends

Kriging algorithms make certain assumptions about their input data sets. In particular, the assumption of “stationarity” insists that there be no inherent trend exhibited by a data set if it is to be Kriged properly. Stationarity is another one of those imposing words which can usually be replaced with a more natural and understandable phrase such as, in this case, “behaving everywhere in the same manner”. Because of the likelihood of the need for data transforms, most 3D modeling software provides a tool to remove the trend from a data set. To tell if the data has such a trend, histograms or variograms of the raw data can be displayed for analysis. If the histogram is “skewed” (the mean or median does not fall in the center), then this can be an indication of an inherent trend. As you will also learn when studying variograms, any variogram which does not flatten out horizontally towards the right may also mean that the data probably has a trend in it. Petrel has a Data Analysis dialog which makes the determination of such trends and their removal in advance of modeling quite easy. See below.

Page 79: 3D Modeling Primer

A typical workflow for Kriging might follow these steps:

• Before Kriging any data set, determine if it has a “trend” by either inspecting its histogram or variogram for horizontal convergence to a sill, or by some other means. [q 1]

• If a trend is identified, then remove the trend using the appropriate tool within the software, krig the data, and then add the trend back to the result. The software will actually do the last step for you automatically, as long as it knows that you have removed a trend in the first place. [q 2]

Please make sure that you realize that the “removal of the inherent trend in the data”, as described above, is an operation totally different from the algorithms described later, such as “Kriging with External Drift” or “Kriging with an External Trend”. In those cases, the desire is not to remove an internal trend from the data, but to add an observed trend external to the data, in order to make the model more accurate or consistent with a secondary variable. [q 3]

Requirement for Normal Score Transform

In addition to trend removal, another data transformation, called the “Normal Score Transform”, is required before making the variogram used for modeling when Sequential Gaussian Simulation or Simple Kriging is to be used. The Normal Score Transform puts the input data into a normal distribution, which simply means that the algorithm can make its computations in a much simpler and faster way. [q 4]

How does Kriging work?
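Before that, here is a minimal Python sketch of the normal score transform just described. It is a generic rank-based illustration (using scipy's normal quantile function), not Petrel's implementation, and the porosity values are made up.

    import numpy as np
    from scipy.stats import norm

    def normal_score_transform(values):
        # Rank-transform data to a standard normal distribution (illustrative).
        # Returns the transformed values plus the sorted data needed to back-transform.
        values = np.asarray(values, dtype=float)
        order = np.argsort(values)
        ranks = np.empty_like(order)
        ranks[order] = np.arange(1, len(values) + 1)
        probs = (ranks - 0.5) / len(values)        # plotting-position probabilities
        return norm.ppf(probs), np.sort(values)

    def back_transform(scores, sorted_values):
        # Map normal scores back to the original distribution by quantile matching.
        return np.quantile(sorted_values, norm.cdf(scores))

    porosity = np.array([0.08, 0.12, 0.15, 0.21, 0.05, 0.18])
    ns, ref = normal_score_transform(porosity)
    print(np.round(ns, 3))
    print(np.round(back_transform(ns, ref), 3))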

Review Questions – Kriging workflow

1. Before Kriging a data set, any internal _________ in the data should be removed. (trend)
2. After removing a trend from the data, most software systems ____ it back ___________ to the output. (add, automatically)
3. Kriging with an External Trend is one way to remove a trend in the data. T/F (F)
4. Another transformation required on the data by the Sequential Gaussian Simulation algorithm is called __________________. (Normal Score Transform)

Page 80: 3D Modeling Primer

Page Title: The Mechanics of 3D Kriging

After the data has been transformed as needed, the algorithm computes the grid as follows:

• Position at a grid node location (the origin of the three pink arrows).
• Collect points in the search zone. The shape and size of this data collection area is specified by the user. Note that the search zone should be as large as or larger than the weight ellipsoid depicted below. [q 2] [q 4]
• Weight the collected points by distance according to the variogram weight scheme. This scheme is graphically represented below by the Weight Envelope, a 3D ellipsoid. The three axes of the ellipsoid represent the Major Horizontal Range (a below), Minor Horizontal Range (b below), and the Vertical Range (c below). The vertical range is typically much smaller than the two horizontal ranges. The values of these three ranges are specified by the user during the creation of each of the variograms. Each axis of the ellipsoid has its own weight function. The major axis of the ellipsoid is shown oriented East/West, but in reality may point in any direction. In the absence of horizontal anisotropy, the Major Horizontal Range and Minor Range are the same. [q 1], [q 3]
• Compute the grid node value, based on the weights of the collected data points.
• Move to the next node and repeat the process.

[Figure: the ellipsoidal weight envelope (axes a, b, c) inside the larger search zone, with data points around the grid node value being computed]
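As a sketch of how such an algorithm can turn a variogram into data weights, the following Python fragment solves a tiny simple-kriging system under an assumed isotropic spherical model (unit sill, 1000 m range); the covariance model, the data, and the function names are illustrative assumptions, not the primer's software.

    import numpy as np

    def spherical_cov(h, a=1000.0, sill=1.0):
        # Covariance from a spherical variogram model: sill - gamma(h).
        h = np.asarray(h, dtype=float)
        gamma = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, sill - gamma, 0.0)

    def simple_kriging_weights(node_xy, data_xy):
        # Solve the simple kriging system C w = c0 for the data-point weights.
        d_data = np.linalg.norm(data_xy[:, None, :] - data_xy[None, :, :], axis=2)
        d_node = np.linalg.norm(data_xy - node_xy, axis=1)
        C = spherical_cov(d_data)       # data-to-data covariances
        c0 = spherical_cov(d_node)      # data-to-node covariances
        return np.linalg.solve(C, c0)

    data_xy = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 800.0]])
    data_z = np.array([0.12, 0.20, 0.17])
    w = simple_kriging_weights(np.array([200.0, 200.0]), data_xy)
    estimate = np.sum(w * (data_z - data_z.mean())) + data_z.mean()   # SK falls back on the mean
    print(np.round(w, 3), round(estimate, 3))

In practice the weights also reflect the anisotropic horizontal and vertical ranges described above, and the different Kriging variants differ mainly in how the mean term is handled.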

Page 81: 3D Modeling Primer

Recall that the weight function used in Kriging is simply the model variogram turned upside down.

Questions for Review:

1. During Kriging, data points are weighted with a function which is derived from the _______. (variogram)

2. During Kriging, the user specifies the ____________ which defines where data is collected around each grid node to be computed. (search range)

3. For Kriging, the Weight Envelope is defined by the three variogram ________. (ranges)

4. The Search Range should be ___________ than the Weight Envelope. (as big as or larger)


[Figure: typical weight function based on distance from the grid node – close points have high weights, far points have low weights]

Page 82: 3D Modeling Primer

Page Title: Difference Between Search & Variogram Range

What is the relationship between the search range and the variogram range (weight envelope)?

• If data points exist inside the Weight Envelope, and there are other points between the Weight Envelope and the Search Range, as shown below, then the variogram weight function does not apply to those points beyond the Weight Envelope, and they are assigned a minimal weight.

• For the case where the minimum required points do not exist inside the Weight Envelope, see the next section.

[Figure (shown in 2D rather than 3D for simplicity): concentric circles around a grid node in the map area – points inside the Variogram Range circle are weighted using the variogram, points between the Variogram Range and the Search Range are assigned a minimal weight, and points outside the Search Range are not used for the node value]

Page 83: 3D Modeling Primer

The Condition For Extrapolation

What happens if there are no points as close as the variogram Range for a grid node being computed?

• At some point, Kriging algorithms will determine when less than the minimum amount of data exists with which to compute a value using the Kriging weight function. In these cases, the grid node value is computed with the selected alternate computational method for the particular type of Kriging involved. This condition typically occurs in the corners or at the edges of the map. This condition is similar to the “extrapolation” condition in non-Kriging algorithms. See the diagram below, which provides a 2D view. This condition can arise whether there are points in the Search Zone or not. [q 1, 2]

[Figure (shown in 2D rather than 3D for simplicity): data points, the Search Range, and the Variogram Range (Weight Envelope) within the map area]

Page 84: 3D Modeling Primer

It turns out that one of the main differences between many of the different Kriging algorithms – Ordinary Kriging, Simple Kriging, Kriging with an External Drift, Co-Kriging, etc. – is how “extrapolation” occurs, i.e., how grid values are computed in the cases of little or no data.

• For example, if Simple Kriging were being used in our example, then when no data is available with which to use the Krig weight function, the node value will be set equal to the average value of all the points (Global Mean).

• If the chosen algorithm had been Ordinary Kriging, then the node value would be some local average of a subset of the total data.

For each type of Kriging we discuss, you will learn its alternate method of computation in these “extrapolated” (little or no data) areas.

Here is a graphic example of a data set where some areas contain dense data and some contain sparse data. [q 3]

[Figure: a data set with a DENSE DATA area and a SPARSE DATA area. Where data is as close as the variogram range to the grid node being computed, it is weighted according to the variogram; where all data is further away than the variogram range, what happens depends on which Kriging algorithm you use]

Page 85: 3D Modeling Primer

Review Questions

1. When there is no data in the Weight Envelope, Kriging changes its behavior and makes use of an __________ algorithm for computing the node value. (alternate)

2. If there is no data in the weight envelope, but there is data within the search range for a node, then the kriging weight function is applied to that data instead. T/F (F)

3. “Extrapolation” is a term used for the method of computing a grid node value when _____________________. (there is little or no data).

4. One of the main differences between the many Kriging algorithms is the way in which they handle _____________ (extrapolation, or sparse data)

Page 86: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Methods

Chapter: Anisotropy Page Title: What is Anisotropy?

First we’ll define anisotropy as a characteristic of data sets. It measures the direction and the degree to which attribute values (porosity, rock type, etc.) vary with direction. For example, if saturation values do not vary at all when looking towards the East, but vary significantly over short distances when looking towards the North, then this is a clear indication of anisotropy, which gives information about how the property is distributed in a particular facies. This information is useful in the gridding step, and allows the system to give preferential weighting to data in a certain direction.

Variograms are used to let the system know that a data set is anisotropic. When you create a variogram for your data, you establish the direction and degree of anisotropy using the variogram parameters. With most geostatistical variogramming tools, anisotropy will be defined only horizontally, by defining two horizontal variograms – one for the major direction and one for the minor direction. The ranges of each of these variograms will differ to define the degree of anisotropy. [q 1], [q 4]

The major and minor directions of anisotropy are specified by an azimuth (in degrees), with the minor axis perpendicular to the major. A horizontal variogram for each direction is required. The degree of anisotropy is defined by the ratio of variogram ranges or sills between the major and minor directions. Sometimes, the degree of anisotropy is also referred to as the eccentricity of the ellipse which is formed according to the major and minor ranges for the two directions (axes). When there is no anisotropy, the variograms for the two directions are identical. We will show more about defining anisotropy with variograms in the next section. [q 2], [q 3]

Page 87: 3D Modeling Primer

An example of anisotropy

• If you were to measure particulate size in the channel complex shown below, its variability across the channels will be much higher than along the channels.

• In geostatistics, we define anisotropy as a characteristic of a set of data values. If there is a clear difference in how data values change in one direction versus how they change in another direction, then the data set is said to be anisotropic.

• If you suspect this kind of directional bias in your property data source, you should determine its direction and degree by using one of the two methods available:

1. Finding the direction by trial and error with the horizontal variogram tool
2. Using a variogram map to display the direction and degree

Afterwards, use the variogram you create in the modeling operation for the property. When you do have anisotropy in your model, your variogram will have two horizontal components – a variogram for the Major direction and one for the Minor direction. The Major variogram will have a larger Range than the Minor one, and thus the weight function in the x,y plane will be elliptical.

Page 88: 3D Modeling Primer

The Geostatistical 3D ellipsoidal weight function

The three variogram RANGES – Major Horizontal, Minor Horizontal, and Vertical – are combined into a single weight function for Kriging operations. When there is no anisotropy, the two horizontal ranges are the same and data in all directions has the same weight.

[Figure: the ellipsoidal weight function around a grid node to be calculated, with data points and the MAJOR, MINOR, and VERTICAL axes labeled]

• In the example shown here, we assume a major and minor direction of horizontal anisotropy, each with a different Range. This has no effect on the data searching, but results in an elliptical weight function. The system incorporates weights along the major and minor directions as well as vertically. [q 1]

• Horizontal weights fall off more quickly along the shorter (minor) axis than along the longer (major) axis when there is anisotropy, meaning that points which are further away in the major direction will count for more than closer points in the minor direction. A minimal numerical sketch of this follows.
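Here is that sketch, a small Python illustration of how the three ranges can be folded into one ellipsoidal (scaled) distance; the ranges, the azimuth convention, and the function name are assumptions made for the example, not values from the primer.

    import numpy as np

    def anisotropic_distance(dx, dy, dz, major=2000.0, minor=500.0, vertical=20.0, azimuth_deg=0.0):
        # Scaled ("ellipsoidal") distance: equals 1.0 on the surface of the range ellipsoid.
        # azimuth_deg is the direction of the major axis, measured from the y (north) axis.
        az = np.radians(azimuth_deg)
        u = dx * np.sin(az) + dy * np.cos(az)    # offset along the major axis
        v = dx * np.cos(az) - dy * np.sin(az)    # offset along the minor axis
        return np.sqrt((u / major) ** 2 + (v / minor) ** 2 + (dz / vertical) ** 2)

    # A point 1000 m away along the major axis is "closer" (0.5) than one only
    # 400 m away along the minor axis (0.8), so it would receive more weight.
    print(anisotropic_distance(0.0, 1000.0, 0.0))   # along the major (north) axis -> 0.5
    print(anisotropic_distance(400.0, 0.0, 0.0))    # along the minor (east) axis  -> 0.8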

Page 89: 3D Modeling Primer

Review Questions – Anisotropy

1. Anisotropy indicates that a collection of data shows different characteristics in different __________ (directions).

2. The direction of anisotropy is measured by an _________. (azimuth)
3. The degree of anisotropy is a ratio of the Major and Minor _________. (ranges)
4. Anisotropy typically shows up in the _________ direction. (horizontal)

Page 90: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Methods

Chapter: Survey Page Title: A Survey of Some Kriging Algorithms

Simple Kriging
• Using the basic mechanism described earlier, Simple Kriging applies the variogram weight to the collected points to compute each node.
• As the nodes to be computed fall further and further from the data, their values tend more towards the global mean of the data set. [q 1]

Ordinary Kriging
• This is the same algorithm as Simple Kriging, except that when the nodes fall further and further from the data, their values tend more towards the local mean of the data.

Kriging With Trend
• This is the same algorithm as Simple Kriging, except that as the nodes fall further and further from the data, their values tend more towards a trend which is provided by the user in the form of parameters, a simple plane, or another surface. [q 2]

Kriging With External Drift
• With this algorithm, standard kriging with the primary data dominates where there is enough primary data within the variogram range.
• Where the primary data is sparse, a secondary, correlated data set can be used, having the following restrictions:
  • Secondary data must exist at all primary data locations
  • Secondary data must exist at all output locations
  • Secondary data, U, is assumed to be linearly related to the primary data, Z, as Z = a0 + a1*U
  • Secondary data should be smoothly varying

Co-Kriging [q 3]
• With this algorithm, standard kriging with the primary data dominates where there is enough primary data within the variogram range.
• Where the primary data is sparse, a secondary, correlated data set can be used, having the following characteristics:
  • Secondary data may exist at different locations than the primary.
  • A variogram for the secondary data is required as well as for the primary.
  • A cross variogram between the primary and secondary is required.
  • This method is labor intensive and slow. It is not used extensively.

Page 91: 3D Modeling Primer

Collocated Co-Kriging [q 4]
• With this algorithm, standard kriging with the primary data dominates where there is enough primary data within the variogram range.
• This algorithm gives the advantage of having a secondary, correlated data set, but is much easier to use than Co-Kriging, and is much faster.
• Where the primary data is sparse, the secondary, correlated data set can be used with the following restrictions:
  • Secondary data must exist at all primary locations, but if you have a secondary grid instead of a scatter set, the system will automatically resample it at the primary locations.
  • No secondary data is required at output locations.

Review Questions – Survey of Kriging Algorithms

1. As data becomes sparser and sparser in the Simple Kriging algorithm, grid values approach the _______. (global mean)
2. As data becomes sparser and sparser in Kriging with a Trend, grid values approach the user-specified ________ values. (trend)
3. Co-Kriging requires _____ variograms. (three)
4. Collocated Co-Kriging works well when you have another __________ data set. (correlated)

Page 92: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Methods

Chapter: Simulation Algorithms Page Title: Simulation Algorithms vs Other Algorithms

What’s the difference between simulation algorithms and other algorithms? Besides the traditional and Kriging algorithms which we have discussed, geostatistics makes use of Simulation algorithms. Like the Kriging algorithms, this class of population algorithm is well-grounded in statistical mathematics, and offers results which are based on solid probability relationships. For example, all the simulation algorithms we will discuss here can be classified as “stochastic”, that is, able to create output grids with the same probability. We will see that Simulation algorithms differ from other types of algorithms in three major ways: [q 2]

1. Random data selection
2. Stochastic results
3. Multiple realizations

[Figure: the same input data feeding both a Simulation algorithm and an Estimation algorithm, each producing its own results]

Page 93: 3D Modeling Primer

Random data selection

One dramatic characteristic of simulation algorithms is that they may not always use all of the available data. When data is collected to compute the value of any grid node, a prime seed number is used to generate random numbers which determine which of the collected data is actually used in the computation. If the same prime seed is used for two executions of a simulation, the results of the two simulations will be identical. If not, they will be different, since different data is used in each case. [q 3], [q 4]

Stochastic results

When different prime seeds are used for two different executions of a simulation algorithm, two different grids will almost certainly result; however, these grids have a very important statistical property – they are said to be “stochastic”, meaning “equally probable” in a statistical sense. Neither is more correct than the other. In the example below, N “versions” of a model of the property data have been created from the same data. Each output grid (realization) has the same probability of being accurate and reasonable as the next grid, even though they will be different.

Multiple realizations

Because simulation algorithms are designed to use a different prime seed for each execution, and are meant to create stochastic results, the idea is to let the algorithm create many realizations (versions) of the output grid and analyze the resulting outputs. By inspecting differences in the output grids, one can get a sense for the “real” solution, which is usually some combination of them. [q 1]

Each output grid is just as “accurate” as the next.

Page 94: 3D Modeling Primer

In some cases, an arithmetic average of all realizations of a particular property may be the “best” solution.

[Figure: the “BEST” model may be the average of the realizations]

Page 95: 3D Modeling Primer

Even though simulation algorithms may not use all the data in all realizations, their results can still maintain the character of the input data, as the comparison below shows:

[Figure: the ability of simulation to maintain data character – the histogram of a Gaussian simulation result compared with the histogram of the input data and of a Kriging result]

Page 96: 3D Modeling Primer


Review Questions – Simulation Algorithms

1. Simulation algorithms can create many versions of a model from the same set of input data. T/F. (T)

2. Models (grids) created by simulation algorithms are said to be __________ because they all have the same probability of being correct and valid. (stochastic)

3. Different versions of grids are created from the same data set by randomly selecting which _______ is used to calculate the node value. (data).

4. If the same seed number is used to calculate two grids from the same set of data, then the grids will be ________. (the same)

Page 97: 3D Modeling Primer

Page Title: Survey of Simulation Algorithms – Sequential Gaussian Simulation for Continuous Data

Description [q 2]

• Data is normalized with the Normal Score Transform; then each output grid value is back-transformed upon calculation. [q 1]
• This is a Krig-based algorithm and therefore a variogram is required.
• Interpolation is performed, and so it is suitable for continuous data.
• As with many algorithms, the search radius should be large enough so that null nodes are minimized.
• Like most simulation algorithms, a controlled randomness is introduced so that multiple realizations can be computed.

Mechanics

• For each grid node, assign the “data” which can be used (data points or other close grid nodes).
• Establish a random path through the grid nodes.
• At each grid node which is visited during the random walk, use the assigned data to compute a cumulative distribution function. This function is based on the mean of the data, the variance, and of course the variogram. This function simply provides a way to calculate a range of reasonable values for the node.
• Pick a random number to select one of the “reasonable” values. Back-transform the value, and assign it to the current node being computed.
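To tie those steps together, here is a deliberately simplified, hypothetical 1D sketch in Python (working in the normal-score domain, with crude linear weights standing in for the real kriging step); it is not SGS as implemented in any package, but it shows the random path, the local distribution at each node, and the conditioning on previously simulated nodes.

    import numpy as np

    def toy_sgs_1d(node_x, data_x, data_z, a=500.0, seed=0):
        # Very simplified 1D sequential simulation on normal-score data (illustrative only).
        rng = np.random.default_rng(seed)
        known_x, known_z = list(data_x), list(data_z)       # conditioning data grows as we go
        values = np.full(len(node_x), np.nan)
        for i in rng.permutation(len(node_x)):              # random path through the nodes
            d = np.abs(np.array(known_x) - node_x[i])
            w = np.clip(1.0 - d / a, 0.0, None)             # crude variogram-like weights
            if w.sum() == 0.0:
                mean, var = 0.0, 1.0                        # no data in range: prior N(0, 1)
            else:
                w = w / w.sum()
                mean = float(np.dot(w, known_z))
                var = max(1.0 - w.max(), 0.05)              # stand-in for the kriging variance
            values[i] = rng.normal(mean, np.sqrt(var))      # draw from the local distribution
            known_x.append(node_x[i]); known_z.append(values[i])   # earlier nodes condition later ones
        return values

    nodes = np.linspace(0.0, 2000.0, 21)
    print(np.round(toy_sgs_1d(nodes, data_x=[0.0, 1000.0], data_z=[-1.0, 1.2], seed=42), 2))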

Page 98: 3D Modeling Primer

Questions for review:

1. Sequential Gaussian Simulation requires your data to be ____________ before the operation begins (normalized).

2. SGS is meant to be used for continuous data – T/F (T).

Page 99: 3D Modeling Primer

Page Title: Sequential Indicator Simulation for Discrete Data

Description

• This algorithm is designed for discrete data, but its mechanics are similar to SGS, with only a few exceptions. [q 1]
• Unlike SGS, the data needs no normal score transform operation.
• A variogram is still required for the cumulative distribution function, but interpolation per se does not occur, only assignment of grid values via selection within classes. [q 2]
• In Median IK (Indicator Kriging) mode, only a single variogram is required, based on the most representative class data.
• In Full IK mode, a variogram for each class is needed.
• SIS is commonly used to compute lithology/facies grids.

Questions for Review:

1. Sequential Indicator Simulation is similar to SGS, but is meant for ___________ data. (discrete)
2. Depending on the mode chosen, SIS may require one ___________ for each facies classification to be modeled. (variogram)

Page 100: 3D Modeling Primer

Page Title: Truncated Gaussian Simulation for Discrete Data

Description

• This algorithm works like SIS, being Krig-based and requiring a variogram. However, it allows the specification of adjacency rules which define those facies in contact and those which are not in contact. These rules are specified by the sequential order of the class specification list. For example, if your data contains several classes, such as some shales, A and B, and some sands, C and D, then the order in which you specify these classes determines their adjacency rules. If you specify D, C, B, A, then in the output grid, no A nodes can be adjacent to C nodes, and no B nodes adjacent to D nodes, etc. [q 1]

Questions for Review:
1. Truncated Gaussian Simulation is designed to accommodate ____________ rules in discrete models such as facies. (adjacency)

Page 101: 3D Modeling Primer

Page Title: Object Modeling for Facies

Description

• This algorithm is a departure from the previous simulation algorithms since it provides dimensioning and orientation parameters to describe the size, position, and shape of geologic objects which are the result of fluvial deposition. The input data are well logs of lithology and facies which guide the simulation and whose values and percentages of facies distribution are honored. Typical dimensioning parameters are shown below. [q 1]

• The user picks one of several shape templates (levee, channel, splay, etc.) and provides enough information regarding length, height, width, shape, density, orientation, and positioning to populate the selected unit. A background, or floodplain, object is also provided.

[Figure: typical channel dimensioning parameters, including amplitude and wavelength]

Page 102: 3D Modeling Primer

Review Questions – Survey of Simulation Algorithms
1. Object Modeling differs from SGS and SIS because it provides dimensioning and orientation ____________ to establish size, position, and shapes of geological objects to be modeled. (parameters)

Page 103: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Operation

Chapter: Gridding Guidelines Page Title: Overview of the property modeling procedure

The property modeling procedure Before property modeling can begin, the following tasks must be assumed to be complete:

• Structural modeling (both horizons and faults) [q 1]

• Definition of a 3D grid geometry (including boundary and vertical components such as horizons, zones, and layers)

• Preparation of input data for facies and petrophysical properties, including

editing and upscaling.

• Preparation of variograms for facies and property data

Because of the simple fact that petrophysical properties such as porosity can behave differently in different facies and depositional environments, different modeling techniques may be required for the same property in different facies. [q 2]

Two-Step Mapping – Conditioning Petrophysical Properties to the Facies

A well-designed 3D property modeling system, therefore, will allow and even encourage the facies to be modeled first. Then, it should provide methods to allow the subsequent modeling of petrophysical properties to be “conditioned” to the existing facies model(s). In essence, during property modeling, the existing facies model is used as a template or “mask” which allows the property data within each classification, such as a channel sand, to be isolated for modeling purposes. [q 3] This two-step mapping fits exactly into our previous description of two of the major data types – discrete and continuous – as well as our previous categorization of gridding algorithms in the same way. We note that modeling facies requires discrete algorithms and modeling of petrophysical properties requires continuous algorithms. [q 4]

In general, modeling of net/gross, porosity, and permeability can be relatively straightforward, making use of the same 3D layering scheme for all three. Saturation modeling, however, can become more complicated, requiring accommodation of different layering schemes due to the relevance of the contact location and orientation, as well as the application of a function.

Page 104: 3D Modeling Primer

Review Questions – Gridding Guidelines

1. Before starting 3D property modeling, it is assumed that _____________ modeling has been completed. (structural)
2. Different facies may cause properties to behave _____________. (differently)
3. _________ modeling should precede ___________ modeling. (facies, petrophysical property)
4. Facies modeling requires __________ algorithms and petrophysical properties require ___________ algorithms. (discrete, continuous)

Page 105: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Operation

Chapter: Algorithm Selection Page Title: Algorithm Selection

How do I know which algorithm to use for my data?

Criteria for Selection of Geostatistical Algorithms

As we have seen, there are many geostatistical algorithms from which to choose, a few with very specific uses. In this section, we’ll organize some of the common reasons why you would use one algorithm and not another for your data. In some cases, the choice will be obvious; in others, it may be up to personal preference. For the algorithms described in this presentation, the following criteria should be used to choose your algorithm. It will not take long for you to get a sense of these criteria, so that a logical choice will become second nature.

Density and Extent of data

• How much data do you have?
• Does it cover the entire area you want to map? [q 1]
• Does the data contain or represent all characteristics of the model which you know to be true?

Distribution of data

• Does your data have unique characteristics? Anisotropy? [q 2]

Type of data (continuous or discrete)

• Does your data represent a continuous property value or a lithology classification?

Type of solution desired (deterministic or probabilistic)

• Is there enough data for a deterministic solution?
• Would the model benefit from describing its characteristics in addition to data values?
• Is there a need for equally probable models to be generated and studied?
• Do I want to condition to facies during modeling?

Alternate Sources of Data [q 3]

• Is there an area in my model where data is particularly sparse?
• Is there a second data set available which covers an area not covered by my primary data?
• Does the secondary data set show that it is correlated to the primary data?

Page 106: 3D Modeling Primer

Review Questions – Choosing an Algorithm
1. The amount of __________ which is available has a lot to do with which algorithm you might choose for gridding. (data)
2. Data which exhibits different characteristics in different directions is said to be _________, and this should be considered when selecting an algorithm. (anisotropic)
3. A second source of data can sometimes be used in gridding, but the second data source should be __________ with the first set. (correlated)

Page 107: 3D Modeling Primer

Page Title: Selecting an algorithm for discrete or facies modeling

A procedural diagram for the selection of an algorithm for DISCRETE data.

Page 108: 3D Modeling Primer

Page Title: Selecting an algorithm for petrophysical modeling

A procedural diagram for selecting an algorithm for CONTINUOUS data.

Page 109: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Operation

Chapter: Quality Control Procedures Page Title: Quality Control During Modeling

During the population of all facies and petrophysical property grids, there are clear and simple methods to ensure at least basic quality control checks for your model. In another section, we will also look into the ways to determine the probabilities of your model and the subsequent reservoir volume calculations. Again assuming that the 3D grid geometry has already been established, a typical modeling workflow can be specified as follows:

• Analyze/understand your data with various geostatistical tools
  • Univariate statistics
  • Histograms
  • Variograms
  • Etc.

• Transform input data as required (threshold, lump, upscale, …) for your data
• Perform quality control checks to make sure that the transformed data has the same characteristics as the original data.
• Use the selection criteria to choose an algorithm appropriate for the data
• Execute the algorithm to populate the grid
• Perform quality control checks

The basic quality control checks can be described as follows. [q 1]

Quality Control by Visual Inspection [q 2]

Inspect the output grid graphically and make sure it honors the input data and looks reasonable. Based on the algorithm chosen, trends you see in the data may or may not be repeated in the output grid. If not, this fact should be revealed by the other QC tests.

Quality Control with Univariate Statistics [q 4]

Look at the min/max and other statistics of the original data, the condensed (upscaled or lumped) data used by the gridding algorithm, and the output grid itself. If the condensed data statistics deviate from those of the original data, then the condensing step did not work properly. If the output grid statistics deviate from the condensed data, then the gridding step did not maintain the characteristics of the data. Small differences are to be expected, but significant differences indicate that something may have not worked as

Page 110: 3D Modeling Primer

you intended. Checking the range of z-values of the input data versus the output grid is one of the easiest and most useful of QC checks.

Quality Control with Histograms [q 3]

Perform the same comparisons as above, but use the histogram for comparison. Here, you want to make sure that the condensed data has the same basic shape as the original data, and that the histogram of the output grid has the same basic shape as the condensed data.

Quality Control with Variograms

Perform the same comparisons as above, but use variograms as the criteria. Variogram shapes and other characteristics should remain the same to verify that the character of the output grid reflects that of the input data.

Quality Control for the Secondary Data Set

When a secondary data set is used, verify that those areas devoid of primary data are reasonably defined by the secondary data. Visual inspection is usually sufficient, although polygonal statistics may be available in your software.

Quality Control by Cross-Validation

With this method of quality control, you may conduct cross-validation tests in which you regrid the property after removing one or more data points. Differences between the grids are calculated, and then analyzed for statistical significance. This is one way to determine how well the chosen algorithm honors differences in data point values based on the distance between them.
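As a small illustration of the statistics and histogram comparisons described above, here is a Python sketch using synthetic stand-in arrays for the raw, upscaled (condensed), and gridded values; the numbers are invented purely to show the mechanics of the check.

    import numpy as np

    def qc_summary(name, values):
        # Print the univariate statistics used for a basic QC comparison.
        v = np.asarray(values, dtype=float)
        print(f"{name:>12}: n={v.size:5d}  min={v.min():.3f}  max={v.max():.3f}  "
              f"mean={v.mean():.3f}  std={v.std():.3f}")

    rng = np.random.default_rng(1)
    raw_logs = rng.normal(0.18, 0.04, size=5000)          # stand-in for raw porosity logs
    upscaled = raw_logs.reshape(-1, 10).mean(axis=1)      # stand-in for upscaled (condensed) data
    model = rng.normal(0.18, 0.035, size=20000)           # stand-in for the populated 3D grid

    for name, vals in [("raw data", raw_logs), ("upscaled", upscaled), ("output grid", model)]:
        qc_summary(name, vals)

    # Histogram comparison: the distributions should have the same basic shape
    bins = np.linspace(0.0, 0.4, 21)
    hist_raw, _ = np.histogram(raw_logs, bins=bins, density=True)
    hist_grid, _ = np.histogram(model, bins=bins, density=True)
    print("max histogram difference:", round(float(np.max(np.abs(hist_raw - hist_grid))), 2))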

Review Questions – Quality Control
1. The last procedure in the typical modeling workflow is ____________. (quality control)
2. ___________ inspection is one way to ensure that input data is honored by the gridding algorithm. (Visual)
3. Quality control using histograms involves checking the __________ of the curves of the input versus the output. (shape)
4. Checking the range of ___________ in the input versus the output is one of the easiest and most useful QC checks. (z-values)

Page 111: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Operation

Chapter: Probability Options Page Title: How Probable Are My Models & Volumes?

When data is quite plentiful, we can have high confidence in our geological and/or property models in most cases. This is true regardless of the algorithm used, since almost all accepted algorithms will produce reasonable results where good data is available. [q 1] When data is missing in some areas, or is, in general, quite sparse, then this is a different situation. If hard decisions are required in the face of sparse data or questionable models, it is the geostatistical simulation tools which can provide the powerful capabilities for the quantification of confidence and probability values. [q 2]

• Let us return to the concept of Stochastic Simulation. We recall that this is simply the process of creating several “versions” of a property model from the same data set, where each version has just as much probability of being accurate and reasonable as the next. Typically the data sets are small, and while the stochastic grids will all honor the data at data locations, they may vary significantly between these locations. For example, using Sequential Gaussian Simulation, we can use a single saturation data set to create 10 “versions”, or realizations, of the data. Each of the saturation grids thus created may look quite different, but they are said to be “equally probable”. Using these different grids of equal probability allows us to study them for common characteristics, knowing that any characteristic which shows up on a high percentage of the grids has a high probability of representing reality.

Page 112: 3D Modeling Primer

Questions for Review:
1. When data is quite plentiful, then __________ algorithms may not be needed. (simulation)
2. Geostatistical tools and algorithms make it easy to ____________ confidence and probability values. (quantify)

Page 113: 3D Modeling Primer

Page Title: Selecting a “best” realization or scenario

It is possible that while analyzing all the realizations of your grids, a single grid was found to be most representative of your concept of a correct model. This realization can be reproduced by simply using the same algorithm and seed number associated with the realization. Each realization uses a different prime seed number for random number generation on which data selection is based.

Averaging Stochastic Realizations

This is an intuitive process whereby the final solution is taken to be the average of all N realizations. This can work well for petrophysical properties, but not for discrete properties, such as facies. Of course, it is also possible to average some “best” subset of all realizations as well. [q 3]

Computing Probability Above or Below a Cutoff

In this process, the system allows the computation of a 3D probability grid. This grid is computed from the N realizations of the stochastic set and provides a percent probability that the corresponding grid values in the property grid will be above or below some specified value.

Computing Cumulative Distribution Functions or Probability Histograms for Volumes

In the simplest form of this procedure, the values of the property grids used in the volumetrics operation are not changed, but the contacts are allowed to vary according to chance (Monte Carlo simulation), giving multiple stochastic grids for gross volume, net pore volume, net pay, or STOIIP, as requested. For each selected output, you can then display a cumulative histogram depicting the 10, 50, and 90% probability cutoffs, such as for STOIIP. In the diagram below, these percentages are respectively indicated by the left, middle, and right vertical dotted lines. One can directly read off the STOIIP from the x-axis where, for example, at the rightmost dotted line, “there is a 90 percent probability that the STOIIP will be less than (the STOIIP value at the dotted line)”. [q 4] Displays like this do not necessarily determine the “best” version of some property, but they quickly allow the end user to become comfortable with one or more easily discernible scenarios, from which major decisions can be made about the reservoir. Note that displays like the one below can be generated only with many realizations.
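For illustration, here is a minimal Python sketch of how a probability-above-cutoff grid and P10/P50/P90 summaries can be pulled out of a stack of realizations; the realization array, the cutoff, and the volume-like summary are synthetic stand-ins rather than Petrel outputs.

    import numpy as np

    # Stack of N equally probable realizations of a property (synthetic stand-ins here)
    rng = np.random.default_rng(7)
    realizations = rng.normal(0.18, 0.05, size=(50, 100, 100, 10))   # (N, ni, nj, nk)

    # Probability grid: at each cell, the fraction of realizations above a cutoff
    cutoff = 0.20
    prob_above = (realizations > cutoff).mean(axis=0)
    print("cells with >80% probability of exceeding the cutoff:",
          int(np.sum(prob_above > 0.8)))

    # P10/P50/P90 of a volume-like summary across the realizations
    volumes = realizations.mean(axis=(1, 2, 3))        # one summary number per realization
    p10, p50, p90 = np.percentile(volumes, [10, 50, 90])
    print(round(p10, 4), round(p50, 4), round(p90, 4))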

Page 114: 3D Modeling Primer

[Figure: the ten, fifty, and ninety percent uncertainty distribution of STOIIP based on stochastic variance of the fluid contacts in the Petrel Volumetrics process]

Uncertainty analysis for multiple, simultaneous properties

Other uncertainty tools are being developed to allow all the individual properties, as well as the contacts, to be stochastically varied in the same operation, giving complex combinations of possibilities, tracked by the software and presented in summary form as cumulative distribution displays.

Page 115: 3D Modeling Primer

Review Questions – Probability Options
1. One technique for finding the “best” solution for a set of stochastic grids is to __________ all the results. (average)
2. The y-axis of a cumulative distribution function of STOIIP shows the ____________ which yielded STOIIP in the specified range. (percentage of realizations)

Page 116: 3D Modeling Primer

Subject: Part 3- Modeling Section: Gridding Operation

Chapter: Comparative Table of Gridding Algorithms Page Title: Table of Gridding Algorithms A Comparative Table of Gridding Algorithms

Page 117: 3D Modeling Primer

Further study
• Many texts are available. Make use of internet search tools for specific topics.
• Highly recommended:
  • Applied Geostatistics, Edward H. Isaaks and R. Mohan Srivastava, Oxford University Press, 1989
  • GSLIB – Geostatistical Software Library and User’s Guide, Clayton V. Deutsch and André G. Journel, Oxford University Press, 1992

Page 118: 3D Modeling Primer


Subject: Part 4 – Variogram Concepts Section: Purpose of This Topic A VARIOGRAM Primer

Purpose of this topic The first purpose of this final topic in the Property Modeling Primer is to review the basic concepts of variograms and the assumptions regarding the generic workflow for their creation. The second purpose is to provide a clear overview of the ways in which variograms can be created in Petrel.

Page 119: 3D Modeling Primer


Section: Review Chapter: Review Page: Review of Basic Facts

Review of Basic Variogram Facts

The basic statistical assumption when modeling properties is: “Data values which are close together are more likely to be similar than values which are far apart.” [Q 1,2] The variogram is a tool for measuring this spatial relationship for any attribute of a population of 3D points (X, Y, Z, Attribute). The y-axis shows increasing attribute variance and the x-axis shows increasing distance. Each point on the variogram shows the variance of all the point pairs in the same distance group (the same approximate distance from each other). Those groups of data points which are closer together are represented on the left of the graph, and those which are farther apart will be on the right. Points high on the graph represent groups of points which are quite different, while points lower on the graph represent groups of points whose values are more similar. As you see, graph points which are low and to the left represent groups of data points which are close together and similar in value, while graph points which are high and on the right represent groups of data points which are far apart and dissimilar. [Q 1]

[Figure: the “classic” experimental variogram shape] [Q 3, 4, 5]

Page 120: 3D Modeling Primer


Questions for review:

1. Data values which are similar are likely to be ___________. (closer)
2. Data values which are dissimilar are likely to be ____________. (further apart)
3. The Y-axis on the variogram shows ____________. (variance)
4. The X-axis on the variogram shows _____________. (distance)
5. Variogram points on the graph which are low and to the left represent groups of data pairs which are ___________ in value. (similar)

Page 121: 3D Modeling Primer


Page: Review of basic facts – variogram model

The experimental, or sample, variogram points above are calculated by the system. In a separate operation, the user fits a curve through these points, modeling the variogram, as below. [Q 1,2] Critical measurements on the variogram model are:

• Range – that distance beyond which data points no longer exhibit any statistical similarity [Q 4]
• Nugget – the Variance where the distance is zero. A non-zero value indicates close points in the data set which do not have similar values [Q 3]
• Sill – that Variance where the summary plot flattens out to random similarity

Questions for review:

1. The experimental variogram is computed by ______________. (the system)
2. The user ___________ (models) the experimental variogram by fitting a ___________ (curve) through its points.
3. The nugget shows variance where the distance is ___________. (0)
4. The range shows that ___________ (distance) where data points cease to have any statistical similarity.

Page 122: 3D Modeling Primer


Page: Review of Anisotropy

Review of Anisotropy

An anisotropic formation is one with directionally dependent properties. The most common directionally dependent properties are permeability and stress. Most formations have vertical to horizontal permeability anisotropy, with vertical permeability being much less (often an order of magnitude less) than horizontal permeability. Bedding plane permeability anisotropy is common in the presence of natural fractures. Stress anisotropy is frequently greatest between overburden stress and horizontal stress in the bedding plane. Bedding plane stress contrasts are common in tectonically active regions. Permeability anisotropy can sometimes be related to stress anisotropy. [Q1] – from the Schlumberger Geological Dictionary

In property modeling, a property is said to be “anisotropic” if its values demonstrate a strong bias in one direction or another. That direction along which property values are most consistently similar is called the Major axis of anisotropy. Perpendicular to this direction, property values vary more rapidly, and this is called the Minor axis of anisotropy. Anisotropy is measured only in the horizontal direction, and is typically associated with an ellipse, where the major axis is the long axis and the minor axis is the short one. Anisotropy units are direction and eccentricity (the ratio of major/minor axis lengths).

[Figure: anisotropy ellipse – along the MAJOR AXIS property values are most similar; along the MINOR AXIS property values vary more]

Page 123: 3D Modeling Primer


Review: Two Kinds of Horizontal Anisotropy

Geometric Anisotropy [Q 2] Sill is constant; Range changes. Major axis shows one range, minor shows another.

Zonal Anisotropy [Q 3] Range is constant; Sill changes.

In some software, the Sill is always shown normalized to 1, which makes the identification of zonal anisotropy difficult.

[Figures: semivariance (γ) versus distance plots illustrating geometric anisotropy (common sill, different ranges) and zonal anisotropy (common range, different sills)]

Questions for review:

1. The most common directionally dependent properties in geology are __________ and ___________ (stress and permeability).

2. In ___________ (geometric) anisotropy, the major and minor axes exhibit different ranges.

3. In ___________ (zonal) anisotropy, the major and minor axes exhibit different sills.

Page 124: 3D Modeling Primer


Page: Review of Variogram Directions and Types

Review of Variogram Directions and Types

When modeling a property in 3D, we make three variograms of the input data: one for the vertical direction and two for the horizontal direction (major and minor), because of the possibility of anisotropy.

• The vertical variogram shows how variance changes with distance vertically, or along the wellbore. [Q 1]

• The horizontal variograms show how variance changes with distance in the X,Y direction. [Q 2] When the property is anisotropic in the X,Y plane, the ranges in these directions (major and minor) are different, which leads to an elliptical weight function in the X,Y plane. [Q 3]

Note: It is typical that vertical variograms will look better than horizontal variograms, i.e., exhibit a more “classic” variogram shape. This is because of larger numbers of data points vertically than horizontally when well logs are the primary data source.

During population of a 3D grid, the vertical and horizontal components of the variogram are blended together into a single variogram for convenience and speed of computation. Horizontal weights and vertical weights are typically different, but the system sorts this out automatically. The shape of the variogram curve is inverted and becomes the WEIGHT FUNCTION during gridding. Points further than the Range have effectively no weight during gridding. [Q 4]


Questions for review:

1. A ____________ (vertical) variogram shows how variance changes vertically.
2. A ____________ (horizontal) variogram shows how variance changes horizontally.
3. ____________ (Horizontal) variograms can have major and minor components.
4. During modeling, data points which are further away than the _________ (range) from the node being computed have effectively no weight.


Page 125: 3D Modeling Primer


Section: The Big Picture Chapter: Generic Workflows Page: How do you make a variogram? How do you make a variogram? Here is a simple outline of a generic workflow for making variograms. Similar mechanics should be provided by any quality software which offers geostatistical features. Recall that every data set will ultimately be defined by 3 variograms – two horizontal (which may be the same), and one vertical. Prepare your data

1. Load your data, then edit it as desired, clipping away any portions you consider non-representative. [Q 1] Inherent trends in the data should be removed before variogramming; use the Data Analysis tools in Petrel. If the variogram will be used for Sequential Gaussian Simulation or Sequential Indicator Simulation, also apply a normal score transform to your data first (a small sketch of this transform follows). [Q 2] In general, use as much data as possible when variogramming; in Petrel, this means selecting the raw logs, not just the upscaled logs. [Q 3]
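A minimal sketch of a normal score transform, assuming numpy and scipy are available and using hypothetical porosity values (this is not the Petrel implementation): rank the data, then map each rank to the matching quantile of a standard normal distribution.

import numpy as np
from scipy.stats import norm

values = np.array([0.08, 0.12, 0.15, 0.22, 0.31])   # hypothetical porosity samples
ranks = values.argsort().argsort() + 1               # rank of each sample, 1..n
p = (ranks - 0.5) / len(values)                      # plotting positions in (0, 1)
nscore = norm.ppf(p)                                 # standard normal scores
print(np.round(nscore, 3))                           # transformed values, mean ~0, std ~1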

Determine presence of anisotropy

2. Determine whether anisotropy exists in the X/Y directions. This can be done in two ways; refer to the following section for the procedures.

Compute horizontal experimental variogram

3. Begin the horizontal variogram by working in the major direction first:

• If anisotropy was determined in step 2, set the major azimuth to that direction. If no anisotropy was determined, leave the azimuth at zero.

Find the best experimental variogram shape

• Set the lag distance (the size of the bin used to group the data) to something relevant to the spacing between data values in the X/Y direction, say 50-100 m, or possibly larger. [Q 4]

• Adjust the lag distance and the search cone to obtain the best experimental variogram shape. The experimental variogram is simply the collection of variance values computed for the point pairs in each lag class, as sketched below. As you change the lag distance and search cone, the pattern of points will change; look for the classic shape shown earlier in this document.
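As an illustration of those mechanics only (synthetic values and hypothetical locations; not Petrel code), the sketch below pairs every two samples, bins the pairs by separation distance using the lag, and averages 0.5*(zi - zj)^2 within each bin:

import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1000.0, size=(200, 2))     # hypothetical sample locations (m)
z = rng.normal(0.15, 0.03, size=200)             # hypothetical porosity values

lag = 50.0                                       # bin width, e.g. 50 m
n_lags = 10

dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
semis = 0.5 * (z[:, None] - z[None, :]) ** 2
i, j = np.triu_indices(len(z), k=1)              # each pair counted once

bins = (dists[i, j] / lag).astype(int)
gamma = [semis[i, j][bins == b].mean() for b in range(n_lags) if np.any(bins == b)]
print(np.round(gamma, 5))                        # experimental variogram points

Because the synthetic values here are spatially uncorrelated, the points scatter around the overall variance at every lag, which is roughly what a data set with no spatial relationship looks like in practice.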


Model the horizontal component(s)

4. If, by changing all the parameters at your disposal, you are not able to obtain an experimental variogram which resembles the classic shape, it is possible that there are no spatial relationships in the data set, and you should simply select some reasonable defaults for the Range, Sill, and Nugget. The experimental variogram above is close to a good example of data which has no particular spatial relationships.

5. If the experimental variogram does make a shape that you can follow, then start the process of modeling the horizontal variogram. Modeling a variogram is the term used for fitting a specific curve through the experimental variogram points.

• Begin by selecting a model type, which defines the basic mathematical characteristic of the curve you will fit through the experimental variogram points. Experiment with each type to see how the curve changes.

Common model types: Spherical, Exponential, and Gaussian (sketched in code at the end of this step).

• Model the variogram by interactively changing the Range, Sill, or Nugget to adjust the shape of the variogram curve, fitting it through the experimental variogram points as desired. [Q 1] Pay most attention to the points on the left of the graph, which rise from the Nugget to the Sill. [Q 2]

Range: where does the curve start to flatten?
Nugget: where does the curve intersect the vertical axis?
Sill: what is the value of the flat part of the curve to the right?

When you are satisfied with the curve, move to the next step.
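For reference, here is a minimal sketch of the three model types named above, each fully described by its nugget, sill, and range (illustrative numbers, not Petrel code; the exponential and Gaussian forms use the common practical-range convention of reaching about 95% of the sill at the range):

import numpy as np

def spherical(h, rng, sill, nugget):
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def exponential(h, rng, sill, nugget):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def gaussian(h, rng, sill, nugget):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (h / rng) ** 2))

h = np.linspace(0.0, 800.0, 5)                       # lag distances (m), illustrative
for model in (spherical, exponential, gaussian):
    print(model.__name__, np.round(model(h, rng=500.0, sill=1.0, nugget=0.1), 3))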


Finish horizontal variogram(s) by modeling the minor component

6. If you determined that anisotropy exists in the horizontal direction, finish the horizontal variogram(s) by modeling the minor horizontal variogram in the same way as the major one described above. You will not be able to change the direction of the minor variogram, since it is always normal to the major axis; you can change only its Range, Sill, and Nugget.

7. As a quality check, you should find that the Range of the minor horizontal variogram (m) is smaller than that of the major (M), as depicted below.

8. If there is no anisotropy, you may ignore the minor variogram, since by default it will be the same as the major in the absence of anisotropy.

Model the vertical component

9. Now you can begin to model the vertical variogram, that component of the total variogram which measures variance vertically along the wellbore. For vertical variograms, anisotropy is not an issue; everything is simply up and down.

• Start with a lag distance (bin size for grouping the data) appropriate to the spacing between data values in the vertical direction, say 1 meter, or maybe even smaller. [Q 4]

[Figure: weighting ellipse in the X/Y plane, with major axis M and minor axis m.]


• Adjust the lag distance and the search cone to obtain the best experimental variogram shape, and continue just as you did when modeling the major horizontal variogram.

• Select the model type and fit the curve through the points just as you did for the horizontal variograms. Note that it is not uncommon to get better-defined vertical variograms than horizontal variograms because of the larger amount of data. You may also see more events in vertical variograms, such as multiple sands, which show up as a series of sinusoidal cycles in the experimental variogram. Only one such feature can be defined with current variogram analysis, so in these cases ignore all the cycles except the leftmost one.

10. As a quality check, you should find that the range of the vertical variogram is much smaller than the ranges of the two horizontal variograms.

11. Note that it does not matter whether the vertical or the horizontal components of a variogram are modeled first.

Questions for review:

1. Before making a variogram, data which is _________________(non-representative) should be edited, clipped, or thresholded.

2. A _______________(normal score transform) should usually be done on your data before you make a variogram, if SGS or SIS will be used.

3. It is typically better to use _____ (raw) logs for variograms.

4. When doing vertical variograms, you should start with a lag size in the neighborhood of ______ (1 to 2) feet for typical boreholes.

5. "Modeling" a variogram means adjusting its critical parameters so that a curve fits through the ________________ (experimental variogram) points.

6. Besides model type, the most critical parameters which define the shape of the variogram are ___________, ____________, and ___________ (range, sill, and nugget).


Page: How do you determine anisotropy?

How do you determine anisotropy?

Method 1 - Determine anisotropy by Trial and Error [Q 2]

• Set the horizontal lag distance to some value which is representative of the horizontal distance between your data points. A good starting point is the spacing between the more closely spaced points in your data set.

• When searching for anisotropic directions, the horizontal variogramming tool in your software should allow you to change the Major direction of your experimental variogram to any compass direction you desire. Start with the Major direction set to North as a simple convention.

• Adjust the lag distance and the search cone or search angle to obtain the most "classic" variogram shape possible.

• Evaluate experimental variogram shapes in various directions, sweeping from North around to South in 30-degree increments, and find the direction which gives the best shape. Then vary the direction in 10-degree increments on either side of that best direction to fine-tune your result, noting the final angle of the Major direction (a sketch of this search loop follows).
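A minimal sketch of this trial-and-error loop, using hypothetical well locations and porosity values (not Petrel code): a directional experimental variogram is computed for azimuths 0 to 150 degrees in 30-degree steps, keeping only pairs whose separation direction lies within a tolerance of each azimuth.

import numpy as np

def directional_variogram(xy, z, azimuth_deg, lag, n_lags, tol_deg=22.5):
    """Average 0.5*(dz)^2 per lag bin, keeping only pairs whose separation
    azimuth lies within +/- tol_deg of the requested direction."""
    i, j = np.triu_indices(len(z), k=1)
    d = xy[j] - xy[i]
    dist = np.hypot(d[:, 0], d[:, 1])
    az = np.degrees(np.arctan2(d[:, 0], d[:, 1])) % 180.0     # azimuth from north, undirected
    keep = np.abs(((az - azimuth_deg) + 90.0) % 180.0 - 90.0) <= tol_deg
    semis = 0.5 * (z[j] - z[i]) ** 2
    bins = (dist / lag).astype(int)
    return [semis[keep & (bins == b)].mean() if np.any(keep & (bins == b)) else np.nan
            for b in range(n_lags)]

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 2000.0, size=(300, 2))     # hypothetical well locations (m)
z = rng.normal(0.18, 0.04, size=300)             # hypothetical porosity values

for azi in range(0, 180, 30):                    # 30-degree increments from North
    print(azi, np.round(directional_variogram(xy, z, azi, lag=100.0, n_lags=8), 4))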

Method 2 - Determine anisotropy by using a Variogram Map or Surface [Q 3]

• If your software allows it, move to the dialog from which a variogram map can be made. This should be a relatively simple procedure, with a contour map being generated automatically by your software with few, if any, parameter selections. Make sure to set the search distance, which should be large enough that a fragmented map does not result.

• View the variogram map, which will be symmetrical on either side of the major direction of anisotropy. If anisotropy is present, the symmetry of the contours will reveal a distinct oval or elliptical shape centered in the middle of the map. Note the azimuth of the longest axis of the ellipse on the map; this is the major direction of anisotropy, which you will specify when you return to the horizontal variogram dialog to begin the actual modeling of the major horizontal component (a sketch of how such a map is assembled follows).
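A minimal sketch of how such a map is assembled (illustrative data, not the Petrel implementation): every data pair is binned by its (dx, dy) offset and the semivariance is averaged per cell; low values form an ellipse around the centre when the data are anisotropic.

import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 2000.0, size=(300, 2))     # hypothetical locations (m)
z = rng.normal(0.18, 0.04, size=300)             # hypothetical porosity values

cell = 100.0                                     # map cell size (m)
half = 10                                        # cells each way, i.e. a 1000 m search distance
vmap = np.zeros((2 * half + 1, 2 * half + 1))
count = np.zeros_like(vmap)

i, j = np.triu_indices(len(z), k=1)
d = xy[j] - xy[i]
semis = 0.5 * (z[j] - z[i]) ** 2
col = np.rint(d[:, 0] / cell).astype(int)
row = np.rint(d[:, 1] / cell).astype(int)
ok = (np.abs(col) <= half) & (np.abs(row) <= half)
for r, c, s in zip(row[ok], col[ok], semis[ok]):
    # add each undirected pair to its cell and to the mirrored cell (the map is symmetric)
    for rr, cc in ((r, c), (-r, -c)):
        vmap[rr + half, cc + half] += s
        count[rr + half, cc + half] += 1

vmap = np.divide(vmap, count, out=np.full_like(vmap, np.nan), where=count > 0)
print(np.round(vmap[half - 2:half + 3, half - 2:half + 3], 4))   # centre of the map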


Questions for review:

1. The first step in defining the horizontal component of the variogram is to determine if __________ (anisotropy) exists.

2. One method to see if a data set exhibits a directional bias is called ______________ (Trial and Error) and involves changing the ___________ (search azimuth) every 30 degrees.

3. When a data set exhibits anisotropy, a ____________ (variogram map) will typically show oval contours, revealing the major axis of anisotropy.

4. When a data set is anisotropic, the horizontal component of its variogram has two _____________(axes), one called ____________(major), the other _______ (minor).


Page: How do you use variograms to determine layer distance?

How do you use variograms to determine the layer thickness?

In some systems, you are not given access to variogramming tools until you have already established a layer thickness for your model. This is only an inconvenience, since the model geometry can usually be regenerated quickly with a different layer thickness. Be sure not to model any facies or property grids first, however, as these will have to be redone if you decide to change the layering in this manner. In other words, if you are going to use this method to help determine layer thicknesses, be sure to do it early.

It turns out that the Range of the vertical variogram for any facies or property attribute is a very good measure of the "natural homogeneity" of the data values in the vertical direction [Q 2] and, as such, is a good value to use for the layer thickness in one or several zones (a small sketch of this idea follows). [Q 1] If the system you use allows different layering schemes in different zones, then by all means make a brief study of the vertical variogram ranges for facies and properties within those zones. If your system supports only one layering scheme for the entire model, you will have to settle on some optimum value for all facies and properties; if your system allows each zone to be layered independently, you will clearly be better off.

In addition, or as an alternative, some systems allow facies thickness analysis and proportion analysis to be performed. Any of these methods will provide a valid and rational basis for the specification of layer thickness. [Q 3]
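A minimal sketch of the idea (zone names, thicknesses, and vertical ranges below are hypothetical): use the vertical variogram range of each zone as a guide for that zone's layer thickness.

zones = {
    # zone name: (gross zone thickness in m, vertical variogram range in m)
    "Upper":  (40.0, 2.0),
    "Middle": (25.0, 1.0),
    "Lower":  (60.0, 4.0),
}
for name, (thickness, v_range) in zones.items():
    n_layers = max(1, round(thickness / v_range))
    print(f"{name}: layer thickness ~{v_range} m -> about {n_layers} layers")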

Questions for review:

1. Variogramming can be used to study the natural homogeneity of the data to be modeled, and thereby help determine the optimum ______________(layer distance) for the various zones in the model.

2. It is the _________(vertical) variogram which is studied to determine the natural homogeneity of the data.

3. In addition to variograms, __________________ and __________________ (facies thickness analysis, proportion analysis) in Petrel can be used to help determine data homogeneity.


Chapter: Petrel-Specific Variogram Facilities
Page: Overview

Making Variograms in Petrel - An Overview

This section is specific to Petrel and provides guidance for the creation of variograms. We will stick to the original premise of this primer and cover only the mechanics of these operations, not the analytical interpretation or geological meaning of any of the measurements or relationships. Please refer to a more advanced treatment of variogramming techniques, or to specific Petrel modeling courses, for interpretive techniques beyond the ones mentioned here.

There are three locations from which you can create or define variograms in Petrel: [Q 1]

- In the Settings panel for any data or grid object
- From the Facies Modeling and Petrophysical Modeling dialogs
- From the Data Analysis function under Property Modeling in the Process Diagram

The most interactive and robust methodology resides in the Data Analysis menus, and this is where most variograms are made. [Q 4] The least interactive method involves simply filling in the critical parameters, such as Range, Sill, and Nugget, in the modeling dialogs themselves. The Settings panel provides a semi-interactive method for creating variograms; it has the advantage that several variograms can be seen on the same display at once, [Q 2] and they are retained as graphic entities. Only limited transforms are available from the Settings panel; in particular, normal score transforms are not available. However, variogram maps are made in the Settings panel. [Q 3]

Questions for review:

1. Name the three locations in Petrel where variograms can be specified or created. ___________, ___________, ______________. (Object’s setting, Modeling dialogs, Data analysis)

2. The advantage of making variograms in the _____________ (Settings) is that you can see more than one variogram at once.

3. The advantage of making variograms in the _____________ (Settings) is that you can also make variogram maps there.

4. The advantage of making variograms in the _____________ (Data Analysis) is that it’s the most rigorous and robust way to do so.


Page: Variogram Facilities in an object's Settings tab

Petrel variogram facilities from an object's Settings tab

With this method, you create an experimental variogram which is subsequently viewed in a Function window. If you want to model it, you can select a variogram modeling icon in the Function window to complete a variogram which can be used in modeling. From the Settings panel, you also have the choice of creating a variogram map, which will automatically reveal the direction and degree of anisotropy. In Petrel, experimental variograms are called "sample" variograms. [Q 3] Sample variograms cannot be used directly in modeling operations. [Q 4]

1. Highlight the object, right-click on it, and select "Settings".

2. In Settings, stretch the window to the right until you can see the "Variogram" tab, then click on it.

3. In this tab, look at the "Hints" tab, and note that you can set a transform type for the variogram, its orientation parameters, and the lag and search radius before the computation begins. [Q 1]

4. After the sample variogram has been created, it will show up at the bottom of the Input folder in the Petrel Explorer, under the Variogram folder.

5. If you then create a new Function window, you can display it as below: [Q 2]


6. Use the Make Variogram icon to turn the experimental variogram into your own variogram model by setting the parameters appropriately in the dialog which appears:

7. Each variogram model you create will appear in the data hierarchy under Variograms, as well as graphically in the Function window.

[Figure: the Make Variogram icon in the Function window.] [Q 5]



Questions for review:

1. Is it possible to set the lag and the search radius before making a variogram in the Object’s Setting location? - Y or N (Y)

2. Variograms made in the Object’s Setting location can be viewed using a __________ (Function) window.

3. Variograms made in the Object's Setting location are called ____________ (Sample) variograms.

4. Can variograms made in the Object's Setting location be used in modeling operations just as they are? - Y or N (N)

5. When viewing variograms from the function window, there is an icon on the lower right which will let you convert the ____________ (sample or experimental) variogram into a _____________ (variogram model).


Page: Variogram facilities in the modeling dialogs

Petrel variogram facilities in the Modeling dialogs

To be precise, it is not really possible to create a variogram, in the graphic sense, in the modeling dialogs. Below is the dialog for Petrophysical Modeling; the dialog for Facies Modeling looks much the same. In this mode of providing a variogram for the modeling operation, we cannot see the data, the experimental variogram, or the variogram model; we can only fill in the critical variogram values in the Variogram tab. [Q 1,2]

[Figure: the "Use the Variograms Made in Data Analysis" icon in the modeling dialog.]

It is, however, possible, in most modeling dialogs, to refer to and use variograms created in Data Analysis. See the “Use the Variograms Made in Data Analysis” icon, typically on the same dialog row as the algorithm choice. If you use this icon, the modeling will automatically assign the appropriate values for the variogram, although you will not be able to see them from here.

Questions for review:

1. In the modeling dialog, can you create a variogram graphically? - Y or N (N)

2. In the modeling dialog, variograms can be specified by simply typing in the ________ (critical) values for the variogram.


Page: Variogram facilities in the Data Analysis tool - preparing for discrete data

Petrel variogram facilities in the Data Analysis tool

Here, we have the best control over our variogram, although, at present, there is no way to point this tool at arbitrary data such as scatter points or grids, as was possible using the Settings tab earlier. This method addresses only 3D model grids and their associated zones, upscaled well logs, and raw well logs. [Q 1]

Preparing to Make Variograms for Facies Models

1. Click on Data Analysis in the Property Modeling section of the Process Diagram.

2. Click on the Variograms tab.

3. Select an upscaled facies grid whose variogram you wish to compute.

4. Select the Zone you want (data outside of this zone will not be seen). [Q 2]

5. Unlock the parameters so they are visible.

6. Click on "Use the Raw Logs" (we want a bigger sample than only the upscaled logs). Note that variograms can also be made of the grid models for QC. [Q 3]

7. If you have chosen facies as your property, select which classification you want to make the variogram for.


The next step is to simply follow the generic instructions for creating variograms, either from the workflows outlined in the previous section, or from the section below called "Modeling the Variogram in Data Analysis".

Note on variograms for DISCRETE data

Even though all the raw data for one discrete facies class will have the same value, the resulting variogram provides valuable information about that class's spatial distribution in three dimensions for algorithms such as Sequential Indicator Simulation. The variograms computed for discrete data are actually different from those for continuous data: in effect, each class is treated as a 0/1 indicator.
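A minimal sketch of that indicator idea (hypothetical facies codes and sample spacing; not the Petrel implementation): the chosen class is converted to a 0/1 indicator log and the experimental variogram is computed on the indicator values.

import numpy as np

facies = np.array([1, 1, 2, 2, 2, 0, 0, 1, 1, 2])    # hypothetical facies codes along a well
spacing = 0.5                                        # hypothetical sample spacing (m)

def indicator_variogram(codes, target, lag_steps):
    ind = (codes == target).astype(float)            # 1 where the facies is present, else 0
    return [0.5 * np.mean((ind[h:] - ind[:-h]) ** 2) for h in lag_steps]

steps = [1, 2, 3, 4]
gamma = indicator_variogram(facies, target=2, lag_steps=steps)
for h, g in zip(steps, gamma):
    print(f"lag {h * spacing:.1f} m: gamma = {g:.3f}")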

Questions for review:

1. Although the Data Analysis dialog gives the most robust tool for making variograms, there is no way to point this tool to arbitrary grids and data sets in your input folder – T or F – (T)

2. Can variograms be computed across zones? - Y or N (N)

3. Can variograms of grids be created, as well as from raw logs? - Y or N (Y)


Page: Preparing for Variogramming continuous properties

Preparing to Make Variograms for Continuous Properties

If you have chosen a continuous property for which to compute a variogram, you must first decide whether you are going to condition those property values to a specific facies distribution. If you have no facies model, you will not have this choice. [Q 1]

Conditioning petrophysical properties to a facies model

If you have gone to the trouble of making a facies model, and you have at least a reasonable number of wells, then "conditioning to facies" makes sense as an option during the modeling step. Selecting this option causes the petrophysical data falling within each facies to be treated independently during modeling. For example, if porosity were the property being conditioned, and the cell being evaluated had been classified as a fine sandstone in the facies model, then only those porosity values associated with fine sandstone would be used in the computation of porosity for that cell. This is called segregation of data values by facies (a small conceptual sketch follows). If the modeling algorithm we choose requires a variogram, then we must address the creation of variograms with respect to the facies distribution, as discussed below.

A property variogram for each facies represented in the data

Because properties behave differently in different facies, if you choose the "condition to facies" option you will need to make a separate variogram for the continuous property in each facies. For example, make one variogram of porosity as it exists within the channel, and others as it exists within, say, the levee and the plain. For each facies class, only the data located there will be used in the variogram, and during the modeling. [Q 3] While making a conditioned variogram for one facies, you have the option to copy it and associate it with another; that is, you can copy variogram definitions from one facies to another. [Q 2]

Below, after the sketch, you can see the mechanics of preparing to create variograms for continuous properties in the Petrel Data Analysis dialogs, with or without "conditioning".
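Before the Petrel mechanics, here is a minimal conceptual sketch of segregation by facies (all names and values are hypothetical): for each facies code, only the porosity samples that fall in that facies contribute to its statistics and its variogram.

import numpy as np

facies   = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])          # 0=channel, 1=levee, 2=plain
porosity = np.array([0.24, 0.22, 0.15, 0.14, 0.16,
                     0.06, 0.07, 0.23, 0.13, 0.05])

for code, name in {0: "channel", 1: "levee", 2: "plain"}.items():
    phi = porosity[facies == code]                           # segregated data for this facies
    print(f"{name}: n={phi.size}, mean porosity={phi.mean():.3f}")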


The mechanics for preparing for variogramming of continuous properties in the Petrel Data Analysis dialog

1 - 6. To prepare for a continuous property, perform the same steps 1 through 6 as for discrete data, but pick a property in step 3. After selecting the Zone and unlocking the parameters, you can proceed to the Variogram tab. If you do not wish to condition the variogram(s) of the property to the facies, do not click on the Facies button; simply proceed to make the vertical and horizontal components of a single variogram as described in the generic section earlier, or in the section below called "Modeling the Variogram in Data Analysis". Otherwise, to condition to facies, continue to the next step.

7. Click the Facies button under Zones to condition to the facies, if you have them. Select the facies model you wish to use; it may contain more than one facies classification. If you do not select Facies, data for the entire property will be used and you can skip step 8 and go directly to step 9. [Q 4]

8. Pick the facies classification whose variogram you want to compute.

9. Follow the generic variogram outline in the previous section, or the one below.


Questions for review:

1. An option available when making variograms in Data Analysis for petrophysical properties is the ability to ___________ (condition) the variogram of petrophysical properties to a ___________ (facies) model.

2. When creating variograms for discrete data classes, is it possible to “copy” a variogram from one facies classification to another? – Y or N (Y).

3. When conditioning a property variogram to a particular facies in a model, the data used for the variogram must reside only within the selected _________ (facies).

4. If, when making variograms for a petrophysical property, you want to condition the variogram to a facies model, you must click on the ________ (Facies) button.


Chapter: Petrel Interactive Tools and Icons for Making Variograms
Pages:
- Lag, Azimuth, and Search Angle Icon
- Variogram Display

Interactive tools and icons for variogramming in Petrel

Below are the interactive tools provided in Petrel for manipulating the experimental variogram and the model variogram.

Lag, Azimuth, and Search Angle Icon in Petrel

[Figure: the Lag, Azimuth, and Search Angle icon. Click/drag one handle to change the LAG, another to change the AZIMUTH, and a third to change the SEARCH ANGLE.] [Q 1]

Petrel variogram display, showing the cloud histogram [Q 2,3]

[Figure: the Petrel variogram display, with callouts identifying:
- the as-yet-undefined variogram model, to be dragged by the user (blue line)
- the histogram showing how many variogram cloud pairs were averaged to compute each experimental variogram point
- the experimental variogram points
- the default variogram model]


Questions for review:

1. By using the 3-dot icon in the Data Analysis variogram tool, you can modify which three parameters during variogramming? _____________, ______________, ______________ (Lag Distance, Search Angle, Search Azimuth)

2. The histogram superimposed on the Petrel variogramming tool shows how many variogram cloud __________ (pairs) were averaged to compute the experimental variogram point.

3. The blue line on the Petrel variogramming tool can be _______________ (dragged) by the user to model the variogram.


Chapter: Modeling the Variogram in the Data Analysis dialog in Petrel
Page: Simple Petrel procedure

Simple Petrel procedure

In the previous two sections, you learned how to prepare for variogramming in the Petrel Data Analysis dialogs for two cases: discrete and continuous data. Now we show how to create the variogram so it can be used. For both types of data, the creation of the variogram proceeds in the same manner. Previously, the variogramming mechanics were described generically; here, we show how they work specifically in Petrel, using Petrel terminology and dialogs. Most of the procedure is quite similar to the generic mechanics, to which we will refer.

• Determine if anisotropy exists

• Determine anisotropy by using the “Settings” method for computing a variogram map as we do here, or alternatively use the previously described Trial and Error method. [Q 1]

- Right-click the upscaled property you want to variogram and choose "Settings". [Q 2]
- Stretch the window open so you can see and click the Variogram tab.
- Choose Variogram Map and Execute.
- Open a map window and display the variogram map to see if anisotropy is revealed. In the left example, there does not seem to be any. If there were, you would see a symmetrical display with the axis of symmetry being the major azimuth of anisotropy, as in the map on the right. Another problem with the left map is the holes, which are due to the search range being specified too small. [Q 3] If anisotropy is revealed by the oval shape of the contours, as on the right, make a note of the azimuth, in this case about 25 degrees. [Q 4,5]


• Optimize the experimental major variogram shape - click the "Major" tab

- Regardless of the method used to determine anisotropy, be sure to set the correct azimuth for the Major direction before computing its experimental variogram. Verify your choice of azimuth by using the 3-button icon to alter it in small increments, making sure that the shape of the variogram is optimized.

- Further optimize the shape of the experimental variogram point distribution by interactively moving the lag icon and search icon. With sparse data, increase the search range and search angle.

- Experiment with the lag distance, seeing if one size produces a clearer or more classic variogram shape.

- Note the histogram in the background of the variogram. Use it to decide if a particular variogram point is relevant and should be included in the model. The higher the column for a point, the more observations contributed to it.

- In some cases, the points cannot be seen clearly until you drag the blue curve all the way to the bottom.

• Model the variogram for the major direction

Once you have the best experimental shape you can get:
- Decide on the model Type (Exponential, Spherical, ...).
- Interactively drag the Nugget point to where you think it should be.
- Interactively drag the Range point to where you think it should be.
- Click Apply to save the major component.


• Compute the experimental variogram for the minor direction and model it, if anisotropic

- This step is unnecessary if the major azimuth is 0.0; that is, when the variogram is to be omnidirectional, or isotropic. Otherwise, click on the Minor tab, then follow the same shape-optimization and modeling steps as for the major variogram, but note that you’ll not be able to change the minor azimuth; it will always be normal to the major azimuth.

- Click Apply to save the minor component.

• Create the Vertical Component of the Variogram

• After you click on the Vertical tab, the procedure is basically the same for vertical variograms as for the major component above, except that you should start with a small lag distance that relates to the sample spacing along the borehole (1 ft to 2 ft). [Q 6]

• Click Apply to save the vertical component.

The Power of Variogram Analysis

As we mentioned before, there are many relationships, analysis techniques, and interpretive procedures which can be brought to bear during data analysis using variograms. Important facts about the geometry, size, and even composition of a reservoir can be inferred, and indeed discovered, simply by making a thorough inspection of the available data and grids with variograms. This kind of information is invaluable during modeling and uncertainty studies, but such topics and techniques are beyond the scope of this training course. Instead, we recommend perusal of Internet resources, or of advanced Petrel and geostatistics courses.


Questions for review:

1. Before the horizontal component can be modeled, it must be established if _____________ (anisotropy) exists in the data set.

2. To create a variogram map, go to the ________________ (Variogram) tab of an object's ____________ (Settings).

3. If a variogram map exhibits holes, then the _______________ (Search Range) is probably too small.

4. In a variogram map, circular contours centered around the middle of the map suggest that the data set is _______________ (isotropic).

5. The major axis of anisotropy can be measured by the ____________(azimuth) of the long side of the oval shapes defined by the contours on a variogram map.

6. When creating a vertical variogram, the lag should initially be set to a _________ (smaller) number than you would use for horizontal variograms.


Page: Using the variogram

Using the Variogram

Once the variogram is saved, you can always return to Data Analysis and modify it if you like. Variograms made in this way are available for use during property population. As you begin to model your facies or property, make sure that you click on the icon in the modeling dialog named "Use Variograms Created in Data Analysis", as we discussed earlier. [Q 1] Recall that even though these variograms are available to the modeling algorithms, neither the variogram nor its parameters can be seen from there; you must return to the Data Analysis dialogs to see the variograms and their parameters. [Q 2]

The variograms you see in the Property and Facies Modeling dialogs are only the default variograms for modeling. When the icon mentioned above is turned on, they have no relation to the variogram which will actually be used in the modeling. [Q 3]

Questions for review:

1. To use a variogram created in Data Analysis, you should click on the “Use Variograms in Data Analysis” icon in the _____________ (Modeling) dialogs.

2. When modeling, the variograms created in Data Analysis can be edited directly. – T or F (F).

3. When the “Use Variograms in Data Analysis” icon is turned on, the variogram shown in the modeling dialog takes on the values of the one created in Data Analysis – T or F (F).


Self-Study Questions for the Geostatistical Primer

1. What statistical concept does a crossplot display?
2. Wells are typically stretched and squeezed horizontally. T/F
3. What units are used to describe anisotropy?
4. Which is created first, the variogram model or the experimental variogram?
5. What does a histogram measure?
6. Give a simple example of how data is segregated during gridding.
7. The vertical axis of a variogram shows which statistical measurement?
8. Lumping usually decreases the number of data points used in gridding. T/F
9. How many variables are typically represented in a crossplot?
10. What does the width of each measured column in a histogram represent?
11. What does variance measure?
12. Name and briefly describe the three main characteristics which you measure on a variogram plot.
13. Give two reasons for making a variogram.
14. A vertical variogram primarily analyzes the relationship among well bore data in the X/Y direction. T/F
15. The variogram model type determines the overall shape of the variogram curve. T/F
16. What are the three main differences between Kriging and non-Kriging algorithms?
17. In 3D estimation gridding, how are the collected points weighted?
18. Name three typical input data types for property modeling.
19. If you have a secondary data set for use in gridding, name a Kriging algorithm which you might use.
20. Give an example of when you might use a Deterministic algorithm instead of a Probabilistic algorithm.
21. In Simple Kriging, computed values outside the range of the data will become closer in value to the local mean. T/F
22. Name two ways in which simulation (stochastic) algorithms differ from other types of gridding algorithms.
23. The Table of Gridding Algorithms at the end of the Primer can help determine which gridding algorithm to use for facies modeling. T/F
24. What is the single most necessary analytical tool for geostatistical algorithms?