
    2 Deconvolution

Introduction, The Convolutional Model, The Convolutional Model in the Time Domain, The Convolutional Model in the Frequency Domain, Inverse Filtering, The Inverse of the Source Wavelet, Least-Squares Inverse Filtering, Minimum Phase, Optimum Wiener Filters, Spiking Deconvolution, Prewhitening, Wavelet Processing by Shaping Filters, Predictive Deconvolution, Predictive Deconvolution in Practice, Operator Length, Prediction Lag, Percent Prewhitening, Effect of Random Noise on Deconvolution, Multiple Attenuation, Field Data Examples, Prestack Deconvolution, Signature Deconvolution, Vibroseis Deconvolution, Poststack Deconvolution, The Problem of Nonstationarity, Time-Variant Deconvolution, Time-Variant Spectral Whitening, Frequency-Domain Deconvolution, Inverse Q Filtering, Deconvolution Strategies, Exercises, Appendix B: Mathematical Foundation of Deconvolution, Synthetic Seismogram, The Inverse of the Source Wavelet, The Inverse Filter, Frequency-Domain Deconvolution, Optimum Wiener Filters, Spiking Deconvolution, Predictive Deconvolution, Surface-Consistent Deconvolution, Inverse Q Filtering, References

    2.0 INTRODUCTION

Deconvolution compresses the basic wavelet in the recorded seismogram, attenuates reverberations and short-period multiples, and thus increases temporal resolution and yields a representation of subsurface reflectivity. The process normally is applied before stack; however, it also is common to apply deconvolution to stacked data. Figure 2.0-1 shows a stacked section with and without deconvolution. Deconvolution has produced a section with a much higher temporal resolution. The ringy character of the stack without deconvolution limits resolution considerably.

Figure 2.0-2 shows selected common-midpoint (CMP) gathers from a marine line before and after deconvolution. Note that the prominent reflections stand out more distinctly on the deconvolved gathers. Deconvolution has removed a considerable amount of ringiness, while it has compressed the waveform at each of the prominent reflections. The stacked sections associated with these CMP gathers are shown in Figure 2.0-3. The improvement observed on the deconvolved CMP gathers also is noted on the corresponding stacked section.

Figure 2.0-4 shows some NMO-corrected CMP gathers from a land line with and without deconvolution. Corresponding stacked sections are shown in Figure 2.0-5. Again, note that deconvolution has compressed the wavelet and removed much of the reverberating energy.

Deconvolution sometimes does more than just wavelet compression; it can remove a significant part of the multiple energy from the section. Note that the stacked section in Figure 2.0-6 shows a marked improvement between 2 and 4 s after deconvolution.


FIG. 2.0-1. Interpreters prefer the crisp, finely detailed appearance of the deconvolved section (right) as opposed to the blurred, ringy appearance of the section without deconvolution (left). (Data courtesy Enterprise Oil.)

To understand deconvolution, first we need to examine the constituent elements of a recorded seismic trace (Section 2.1). The earth is composed of layers of rocks with different lithology and physical properties. Seismically, rock layers are defined by the densities and velocities with which seismic waves propagate through them. The product of density and velocity is called seismic impedance. The impedance contrast between adjacent rock layers causes the reflections that are recorded along a surface profile. The recorded seismogram can be modeled as a convolution of the earth's impulse response with the seismic wavelet. This wavelet has many components, including source signature, recording filter, surface reflections, and receiver-array response. The earth's impulse response is what would be recorded if the wavelet were just a spike. The impulse response comprises primary reflections (reflectivity series) and all possible multiples.

Ideally, deconvolution should compress the wavelet components and eliminate multiples, leaving only the earth's reflectivity in the seismic trace. Wavelet compression can be done using an inverse filter as a deconvolution operator. An inverse filter, when convolved with the seismic wavelet, converts it to a spike (Section 2.2). When applied to a seismogram, the inverse filter should yield the earth's impulse response. An accurate inverse filter design is achieved using the least-squares method (Section 2.2).

The fundamental assumption underlying the deconvolution process (with the usual case of unknown source wavelet) is that of minimum phase. This issue also is dealt with in Section 2.2.

The optimum Wiener filter, which has a wide range of applications, is discussed in Section 2.3. The Wiener filter converts the seismic wavelet into any desired shape. For example, much like the inverse filter, a Wiener filter can be designed to convert the seismic wavelet into a spike.


FIG. 2.0-2. Note the prominent reflections on the deconvolved gathers (b). The reverberations would make it difficult to distinguish prominent reflections on the undeconvolved gathers (a).


FIG. 2.0-3. (a) The section obtained from the undeconvolved gathers of Figure 2.0-2a, and (b) the section obtained from the deconvolved gathers of Figure 2.0-2b.

However, the Wiener filter differs from the inverse filter in that it is optimal in the least-squares sense. Also, the resolution (spikiness) of the output can be controlled by designing a Wiener prediction error filter, which is the basis for predictive deconvolution (Section 2.3). Converting the seismic wavelet into a spike is like asking for perfect resolution. In practice, because of noise in the seismogram and the assumptions made about the seismic wavelet and the recorded seismogram, spiking deconvolution is not always desirable. The prediction error filter also can be used to remove periodic components (multiples) from the seismogram. Practical aspects of predictive deconvolution are presented in Section 2.4, and field data examples are provided in Section 2.5. Finally, time-varying aspects of the source waveform (nonstationarity) are discussed in Section 2.6.

The mathematical treatment of deconvolution is found in Appendix B. However, several numerical examples, which provide the theoretical groundwork from a heuristic viewpoint, are given in the text. Much of the early theoretical work on deconvolution came from the MIT Geophysical Analysis Group, which was formed in the mid-1950s.

    2.1 THE CONVOLUTIONAL MODEL

A sonic log segment is shown in Figure 2.1-1a. The sonic log is a plot of interval velocity as a function of depth based on downhole measurements using logging tools. Here, velocities were measured between 1000 and 5400 ft at 2-ft intervals. The velocity function was extrapolated to the surface by a linear ramp. The sonic log exhibits a strong low-frequency component with a distinct blocky character representing gross velocity variations. Actually, it is this low-frequency component that normally is estimated by velocity analysis of CMP gathers (Section 3.2).

In many sonic logs, the low-frequency component is an expression of the general increase of velocity with depth due to compaction. In some sonic logs, however, the low-frequency component exhibits a blocky character (Figure 2.1-1a), which is due to large-scale lithologic variations. Based on this blocky character, we may define layers of constant interval velocity (Table 2-1), each of which can be associated with a geologic formation (Table 2-2).


FIG. 2.0-4. Some NMO-corrected gathers associated with the stacked sections in Figure 2.0-5, (a) before and (b) after deconvolution. Deconvolution has removed the ringy character from the data.

Table 2-1. The interval velocity trend obtained from the sonic log in Figure 2.1-1a.

  Layer Number    Interval Velocity,* ft/s    Depth Range, ft
  1               21 000                      1000-2000
  2               19 000                      2000-2250
  3               18 750                      2250-2500
  4               12 650                      2500-3775
  5               19 650                      3775-5400

  *The velocity in Layer 2 gradually decreases from the top of the layer to the bottom.

Table 2-2. Stratigraphic identification associated with the layering described in Table 2-1.

  Layer Number    Lithologic Unit
  1               Limestone
  2               Shaly limestone with gradual increase in shale content
  3               Shaly limestone
  4               Sandstone
  5               Dolomite


FIG. 2.0-5. Deconvolution helps distinguish prominent reflections with ease (b). However, on a section without deconvolution (a), reflections are buried in reverberating energy. Selected CMP gathers for both sections are shown in Figure 2.0-4.

The sonic log also has a high-frequency component superimposed on the low-frequency component. These rapid fluctuations can be attributed to changes in rock properties that are local in nature. For example, the limestone layer can have interbeddings of shale and sand. Porosity changes also can affect interval velocities within a rock layer. Note that well-log measurements have a limited accuracy; therefore, some of the high-frequency variations, particularly those associated with a first arrival that is strong enough to trigger one receiver but not the other in the logging tool (cycle skips), are not due to changes in lithology.

Well-log measurements of velocity and density provide a link between seismic data and the geology of the substrata. We now explain the link between log measurements and the recorded seismic trace provided by seismic impedance, the product of density and velocity. The first set of assumptions that is used to build the forward model for the seismic trace follows:


FIG. 2.0-6. CMP stacks with (a) no deconvolution before stack, and (b) spiking deconvolution before stack. Deconvolution can remove a significant amount of multiple energy from seismic data. (Data courtesy Elf Aquitaine and partners.)

Assumption 1. The earth is made up of horizontal layers of constant velocity.

Assumption 2. The source generates a compressional plane wave that impinges on layer boundaries at normal incidence. Under such circumstances, no shear waves are generated.

Assumption 1 is violated in both structurally complex areas and in areas with gross lateral facies changes. Assumption 2 implies that our forward model for the seismic trace is based on zero-offset recording, an unrealizable experiment. Nevertheless, if the layer boundaries were deep in relation to cable length, we assume that the angle of incidence at a given boundary is small and ignore the angle dependence of the reflection coefficients. The combination of the two assumptions thus implies a normal-incidence, one-dimensional (1-D) seismogram.

Based on assumptions 1 and 2, the reflection coefficient c (for pressure or stress), which is associated with the boundary between, say, layers 1 and 2, is defined as

c = \frac{I_2 - I_1}{I_2 + I_1}, \qquad (2-1a)

where I is the seismic impedance associated with each layer, given by the product of density \rho and compressional velocity v.

From well-log measurements, we find that the vertical density gradient often is much smaller than the vertical velocity gradient. Therefore, we often assume that the impedance contrast between rock layers is essentially due to the velocity contrast only. Equation (2-1a) then takes the form

c = \frac{v_2 - v_1}{v_2 + v_1}. \qquad (2-1b)

If v2 is greater than v1, the reflection coefficient would be positive. If v2 is less than v1, then the reflection coefficient would be negative.
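As a quick numerical check of equation (2-1b), the following minimal NumPy sketch (illustrative, not part of the text) computes the boundary reflection coefficients from the interval velocities of Table 2-1, ignoring density as the equation assumes:

```python
import numpy as np

# Interval velocities (ft/s) of the five layers in Table 2-1.
v = np.array([21000.0, 19000.0, 18750.0, 12650.0, 19650.0])

# Equation (2-1b): reflection coefficient at each layer boundary.
c = (v[1:] - v[:-1]) / (v[1:] + v[:-1])
print(np.round(c, 3))   # boundaries 1-2, 2-3, 3-4, 4-5: [-0.05  -0.007 -0.194  0.217]
```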

The assumption that density is invariant with depth, or that it does not vary as much as velocity, is not always valid. The reason we can get away with it is that the density gradient usually has the same sign as the velocity gradient. Hence, the impedance function derived from the velocity function only should be correct within a scale factor.

For vertical incidence, the reflection coefficient is the ratio of the reflected wave amplitude to the incident wave amplitude. Moreover, from its definition (equation 2-1a), the reflection coefficient is seen as the ratio of the change in acoustic impedance to twice the average acoustic impedance. Therefore, seismic amplitudes associated with earth models with horizontal layers and vertical incidence (assumptions 1 and 2) are related to acoustic impedance variations.

The reflection coefficient series c(z), where z is the depth variable, is derived from the sonic log v(z) and is shown in Figure 2.1-1b. We note the following:

(a) The position of each spike gives the depth of the layer boundary, and
(b) the magnitude of each spike corresponds to the fraction of a unit-amplitude downward-traveling incident plane wave that would be reflected from the layer boundary.

To convert the reflection coefficient series c(z) (Figure 2.1-1b) derived from the sonic log into a time series c(t), select a sampling interval, say 2 ms. Then use the velocity information in the log (Figure 2.1-1a) to convert the depth axis to a two-way vertical time axis.


FIG. 2.1-2. A seismic source wavelet after onset takes the form shown at top left. As the wavelet travels into the earth, the amplitude level drops (geometric spreading) and a loss of high frequencies occurs (frequency absorption).

The result of this conversion is shown in Figure 2.1-1c, both as a conventional wiggle trace and as a variable area and wiggle trace (the same trace repeated six times to highlight strong reflections). The reflection coefficient series c(t) (Figure 2.1-1c) represents the reflectivity of a series of fictitious layer boundaries that are separated by an equal time interval, the sampling rate (Goupillaud, 1961). The major events in this reflectivity series are from the boundary between layers 2 and 3 located at about 0.3 s, and the boundary between layers 4 and 5 located at about 0.5 s.

The reflection coefficient series (Figure 2.1-1c) that was constructed is composed only of primary reflections (energy that was reflected only once). To get a complete 1-D response of the horizontally layered earth model (assumption 1), multiple reflections of all types (surface, intrabed, and interbed multiples) must be included. If the source were a unit-amplitude spike, then the recorded zero-offset seismogram would be the impulse response of the earth, which includes primary and multiple reflections. Here, the Kunetz method (Claerbout, 1976) is used to obtain such an impulse response. The impulse response derived from the reflection coefficient series in Figure 2.1-1c is shown in Figure 2.1-1d with the variable area and wiggle display.

The characteristic pressure wave created by an impulsive source, such as dynamite or an air gun, is called the signature of the source. All signatures can be described as band-limited wavelets of finite duration, for example, the measured signature of an Aquapulse source in Figure 2.1-2. As this waveform travels into the earth, its overall amplitude decays because of wavefront divergence. Additionally, frequencies are attenuated because of the absorption effects of rocks (see Section 1.4). The progressive change of the source wavelet in time and depth also is shown in Figure 2.1-2. At any given time, the wavelet is not the same as it was at the onset of source excitation. This time-dependent change in waveform is called nonstationarity.

Wavefront divergence is removed by applying a spherical spreading function (Section 1.2). Frequency attenuation is compensated for by the processing techniques discussed in Section 2.6. Nevertheless, the simple convolutional model discussed here does not incorporate nonstationarity. This leads to the following assumption:

Assumption 3. The source waveform does not change as it travels in the subsurface; it is stationary.

The Convolutional Model in the Time Domain

A convolutional model for the recorded seismogram now can be proposed. Suppose a vertically propagating downgoing plane wave with the source signature of Figure 2.1-3a travels in depth and encounters a layer boundary at 0.2-s two-way time. The reflection coefficient associated with the boundary is represented by the spike in Figure 2.1-3b. As a result of reflection, the source wavelet replicates itself such that it is scaled by the reflection coefficient. If we have a number of layer boundaries represented by the individual spikes in Figures 2.1-3b through 2.1-3f, then the wavelet replicates itself at those boundaries in the same manner. If the reflection coefficient is negative, then the wavelet replicates itself with its polarity reversed, as in Figure 2.1-3c.

Now consider the ensemble of the reflection coefficients in Figure 2.1-3g. The response of this sparse spike series to the basic wavelet is a superposition of the individual impulse responses. This linear process is called the principle of superposition. It is achieved computationally by convolving the basic wavelet with the reflectivity series (Figure 2.1-3g). The convolutional process already was demonstrated by the numerical example in Section 1.1.
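The superposition principle amounts to a single convolution. The sketch below is a minimal NumPy illustration with made-up wavelet and spike values (not the actual series of Figure 2.1-3); the spikes sit at 0.2 s, 0.35 s, and three closely spaced positions near 0.6 s on a 2-ms grid:

```python
import numpy as np

# Illustrative basic wavelet (arbitrary values) and a sparse reflectivity
# series sampled at 2 ms.
wavelet = np.array([1.0, -0.7, 0.2])
reflectivity = np.zeros(400)
reflectivity[[100, 175, 295, 305, 315]] = [0.2, -0.15, 0.1, 0.12, -0.08]

# Superposition of scaled, shifted copies of the wavelet = convolution.
seismogram = np.convolve(reflectivity, wavelet)
```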

The response of the sparse spike series to the basic wavelet in Figure 2.1-3g has some important characteristics. Note that for the events at 0.2 and 0.35 s, we identify two layer boundaries. However, to identify the three closely spaced reflecting boundaries from the composite response (at around 0.6 s), the source waveform must be removed to obtain the sparse spike series. This removal process is just the opposite of the convolutional process used to obtain the response of the reflectivity series to the basic wavelet. The reverse process appropriately is called deconvolution.


FIG. 2.1-3. A wavelet (a) traveling in the earth repeats itself when it encounters a reflector along its path (b, c, d, e, f). The left column represents the reflection coefficients, while the right column represents the response to the wavelet. Amplitudes of the response are scaled by the reflection coefficient. The resulting seismogram (bottom right) represents the composite response of the earth's reflectivity (bottom left) to the wavelet (top right).

The principle of superposition now is applied to the impulse response derived from the sonic log in Figure 2.1-1d. Convolution of a source signature with the impulse response yields the synthetic seismogram shown in Figure 2.1-4. The synthetic seismogram also is shown in Figure 2.1-1e. This 1-D zero-offset seismogram is free of random ambient noise. For a more realistic representation of a recorded seismogram, noise is added (Figure 2.1-4).

The convolutional model of the recorded seismogram now is complete. Mathematically, the convolutional model illustrated in Figure 2.1-4 is given by

x(t) = w(t) \ast e(t) + n(t), \qquad (2-2a)


FIG. 2.1-4. The top frame is the same as in Figure 2.1-1d. The asterisk denotes convolution. The recorded seismogram (bottom frame) is the sum of the noise-free seismogram and the noise trace. This figure is equivalent to equation (2-2a).

where x(t) is the recorded seismogram, w(t) is the basic seismic wavelet, e(t) is the earth's impulse response, n(t) is the random ambient noise, and the asterisk denotes convolution. Deconvolution tries to recover the reflectivity series (strictly speaking, the impulse response) from the recorded seismogram.

An alternative to the convolutional model given by equation (2-2a) is based on a surface-consistent spectral decomposition (Taner and Coburn, 1981). In such a formulation, the seismic trace is decomposed into the convolutional effects of source, receiver, offset, and the earth's impulse response, thus explicitly accounting for variations in wavelet shape caused by near-source and near-receiver conditions and source-receiver separation. The following equation describes the surface-consistent convolutional model (Section B.8):

x_{ij}(t) = s_j(t) \ast h_l(t) \ast e_k(t) \ast g_i(t) + n(t), \qquad (2-2b)

where x_{ij}(t) is a model of the recorded seismogram, s_j(t) is the waveform component associated with source location j, g_i(t) is the component associated with receiver location i, and h_l(t) is the component associated with the offset dependency of the waveform, defined for each offset index l = |i - j|. As in equation (2-2a), e_k(t) represents the earth's impulse response at the source-receiver midpoint location k = (i + j)/2. By comparing equations (2-2a) and (2-2b), we infer that w(t) represents the combined effects of s(t), h(t), and g(t).

The assumption of surface consistency implies that the basic wavelet shape depends only on the source and receiver locations, not on the details of the raypath from source to reflector to receiver. In a transition zone, surface conditions at the source and receiver locations may vary significantly from dry to wet; hence, the most likely situation in which the surface-consistent convolutional model may be applicable is with transition-zone data. Nevertheless, the formulation described in this section is the most accepted model for the 1-D seismogram.

The random noise present in the recorded seismogram has several sources. External sources are wind motion, environmental noise, or a geophone loosely coupled to the ground. Internal noise can arise from the recording instruments. A pure-noise seismogram and its characteristics are shown in Figure 2.1-5. A pure random-noise series has a white spectrum; it contains all the frequencies. This means that the autocorrelation function is a spike at zero lag and zero at all other lags. From Figure 2.1-5, note that these characteristic requirements are reasonably satisfied.

Now examine the equation for the convolutional model. All that normally is known in equation (2-2a) is x(t), the recorded seismogram. The earth's impulse response e(t) must be estimated everywhere except at the location of wells with good sonic logs. Also, the source waveform w(t) normally is unknown. In certain cases, however, the source waveform is partly known; for example, the signature of an air-gun array can be measured. However, what is measured is only the waveform at the very onset of excitation of the source array, and not the wavelet that is recorded at the receiver. Finally, there is no a priori knowledge of the ambient noise n(t).

We now have three unknowns, w(t), e(t), and n(t), one known, x(t), and one single equation (2-2a). Can this problem be solved? Pessimists would say no. However, in practice, deconvolution is applied to seismic data as an integral part of conventional processing and is an effective method to increase temporal resolution.

To solve for the unknown e(t) in equation (2-2a), further assumptions must be made.

    Assumption 4. The noise component n(t) is zero.


FIG. 2.1-5. A random signal with infinite length has a flat amplitude spectrum and an autocorrelogram that is zero at all lags except the zero lag. The discrete random series with finite length shown here seems to satisfy these requirements. What distinguishes a random signal from a spike (1, 0, 0, . . .)?

    Assumption 5. The source waveform is known.

    Under these assumptions, we have one equation,

x(t) = w(t) \ast e(t), \qquad (2-3a)

and one unknown, the reflectivity series e(t). In reality, however, neither of the above two assumptions normally is valid. Therefore, the convolutional model is examined further in the next section, this time in the frequency domain, to relax assumption 5.

If the source waveform were known (such as the recorded source signature), then the solution to the deconvolution problem is deterministic. In Section 2.2, one such method of solving for e(t) is considered. If the source waveform were unknown (the usual case), then the solution to the deconvolution problem is statistical. The Wiener prediction theory (Section 2.3) provides one method of statistical deconvolution.

The Convolutional Model in the Frequency Domain

The convolutional model for the noise-free seismogram (assumption 4) is represented by equation (2-3a). Convolution in the time domain is equivalent to multiplication in the frequency domain (Section A.1). This means that the amplitude spectrum of the seismogram equals the product of the amplitude spectra of the seismic wavelet and the earth's impulse response (Section B.1):

A_x(\omega) = A_w(\omega)\,A_e(\omega), \qquad (2-3b)

where A_x(\omega), A_w(\omega), and A_e(\omega) are the amplitude spectra of x(t), w(t), and e(t), respectively.
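Equation (2-3b) can be checked numerically. In the minimal sketch below, the short wavelet and the random series standing in for the impulse response are arbitrary choices made for this illustration, not data from the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(512)            # stand-in random "impulse response"
w = np.array([1.0, -0.9, 0.5, -0.1])    # illustrative short wavelet
x = np.convolve(e, w)                   # noise-free seismogram, equation (2-3a)

n = len(x)
Ax = np.abs(np.fft.rfft(x, n))
Aw = np.abs(np.fft.rfft(w, n))
Ae = np.abs(np.fft.rfft(e, n))
print(np.allclose(Ax, Aw * Ae))         # True: amplitude spectra multiply
```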

Figure 2.1-6 shows the amplitude spectra (top row) of the impulse response e(t), the seismic wavelet w(t), and the seismogram x(t). The impulse response is the same as that shown in Figure 2.1-1d. The similarity in the overall shape between the amplitude spectrum of the wavelet and that of the seismogram is apparent. In fact, a smoothed version of the amplitude spectrum of the seismogram is nearly indistinguishable from the amplitude spectrum of the wavelet. It generally is thought that the rapid fluctuations observed in the amplitude spectrum of a seismogram are a manifestation of the earth's impulse response, while the basic shape is associated primarily with the source wavelet.

Mathematically, the similarity between the amplitude spectra of the seismogram and the wavelet suggests that the amplitude spectrum of the earth's impulse response must be nearly flat (Section B.1). By examining the amplitude spectrum of the impulse response in Figure 2.1-6, we see that it spans virtually the entire spectral bandwidth. As seen in Figure 2.1-5, a time series that represents a random process has a flat (white) spectrum over the entire spectral bandwidth. From close examination of the amplitude spectrum of the impulse response in Figure 2.1-6, we see that it is not entirely flat; the high-frequency components have a tendency to strengthen gradually. Thus, reflectivity is not entirely a random process. In fact, this has been observed in the spectral properties of reflectivity functions derived from a worldwide selection of sonic logs (Walden and Hosken, 1984).

We now study the autocorrelation functions (middle row, Figure 2.1-6) of the impulse response, seismic wavelet, and synthetic seismogram. Note that the autocorrelation functions of the basic wavelet and seismogram also are similar. This similarity is confined to lags for which the autocorrelation of the wavelet is nonzero. Mathematically, the similarity between the autocorrelogram of the wavelet and that of the seismogram suggests that the impulse response has an autocorrelation function that is small at all lags except the zero lag (Section B.1). The autocorrelation function of the random series in Figure 2.1-5 also has similar characteristics. However, there is one subtle difference.


FIG. 2.1-6. Convolution of the earth's impulse response (a) with the wavelet (b) (equation 2-2a) yields the seismogram (c) (bottom row). This process also is convolutional in terms of their autocorrelograms (middle row) and multiplicative in terms of their amplitude spectra (top row). Assumption 6 (white reflectivity) is based on the similarity between the autocorrelograms and amplitude spectra of the impulse response and wavelet.

When compared, Figures 2.1-5 and 2.1-6 show that the autocorrelation of the impulse response has a significantly large negative lag value following the zero lag. This is not the case for the autocorrelation of random noise. The positive peak (zero lag) followed by the smaller negative peak in the autocorrelogram of the impulse response arises from the spectral behavior discussed above. In particular, the positive peak and the adjacent, smaller negative peak of the autocorrelogram together nearly act as a fractional derivative operator (Section A.1), which has a ramp effect on the amplitude spectrum of the impulse response, as seen in Figure 2.1-6.

The above observations made on the amplitude spectra and autocorrelation functions (Figure 2.1-6) imply that reflectivity is not entirely a random process. Nonetheless, the following assumption almost always is made about reflectivity to replace the statement made in assumption 5.

Assumption 6. Reflectivity is a random process. This implies that the seismogram has the characteristics of the seismic wavelet in that their autocorrelations and amplitude spectra are similar.

This assumption is the key to implementing predictive deconvolution. It allows the autocorrelation of the seismogram, which is known, to be substituted for the autocorrelation of the seismic wavelet, which is unknown. In Section 2.3, we shall see that as a result of assumption 6, an inverse filter can be estimated directly from the autocorrelation of the seismogram. For this type of deconvolution, assumption 5, which is almost never met in reality, is not required. But first, we need to review the fundamentals of inverse filtering.

    2.2 INVERSE FILTERING

If a filter operator f(t) were defined such that convolution of f(t) with the known seismogram x(t) yields an estimate of the earth's impulse response e(t), then

e(t) = f(t) \ast x(t). \qquad (2-4)

By substituting equation (2-4) into equation (2-3a), we get

x(t) = w(t) \ast f(t) \ast x(t). \qquad (2-5)

When x(t) is eliminated from both sides of the equation, the following expression results:

\delta(t) = w(t) \ast f(t), \qquad (2-6)

where \delta(t) represents the Kronecker delta function:

\delta(t) = \begin{cases} 1, & t = 0, \\ 0, & \text{otherwise}. \end{cases} \qquad (2-7)

By solving equation (2-6) for the filter operator f(t), we obtain

f(t) = \delta(t) \ast \frac{1}{w(t)}. \qquad (2-8)


    FIG. 2.2-1. A flowchart for inverse filtering.

Therefore, the filter operator f(t) needed to compute the earth's impulse response from the recorded seismogram turns out to be the mathematical inverse of the seismic wavelet w(t). Equation (2-8) implies that the inverse filter converts the basic wavelet to a spike at t = 0. Likewise, the inverse filter converts the seismogram to a series of spikes that defines the earth's impulse response. Therefore, inverse filtering is a method of deconvolution, provided the source waveform is known (deterministic deconvolution). The procedure for inverse filtering is described in Figure 2.2-1.

    The Inverse of the Source Wavelet

Computation of the inverse of the source wavelet is accomplished mathematically by using the z-transform (Section A.2). For example, let the basic wavelet be a two-point time series given by w(t) : (1, -1/2). The z-transform of this wavelet is defined by the following polynomial:

W(z) = 1 - \frac{1}{2}z. \qquad (2-9)

The power of the variable z is the number of unit time delays associated with each sample in the series. The first term has zero delay, so z is raised to the zero power. The second term has unit delay, so z is raised to the first power. Hence, the z-transform of a time series is a polynomial in z whose coefficients are the values of the time samples.

A relationship exists between the z-transform and the Fourier transform (Section A.2). The z-variable is defined as

z = \exp(i\omega\Delta t), \qquad (2-10)

where \omega is the angular frequency and \Delta t is the sampling interval.

The convolutional relation in the time domain given by equation (2-8) means that the z-transform of the inverse filter, F(z), is obtained by polynomial division of 1 by the z-transform of the input wavelet, W(z), given by equation (2-9) (Section A.2):

F(z) = \frac{1}{1 - \frac{1}{2}z} = 1 + \frac{1}{2}z + \frac{1}{4}z^2 + \cdots \qquad (2-11)

The coefficients of F(z) : (1, 1/2, 1/4, . . .) represent the time series associated with the filter operator f(t). Note that the series has an infinite number of coefficients, although they decay rapidly. As in any filtering process, in practice the operator is truncated.

First consider the first two terms in equation (2-11), which yield a two-point filter operator (1, 1/2). The design and application of this operator are summarized in Table 2-3. The actual output is (1, 0, -1/4), whereas the ideal result is a zero-delay spike (1, 0, 0). Although not ideal, the actual result is spikier than the input wavelet, (1, -1/2).

Can the result be improved by including one more coefficient in the inverse filter? As shown in Table 2-4, the actual output from the three-point filter is (1, 0, 0, -1/8). This is a more accurate representation of the desired output (1, 0, 0, 0) than that achieved with the output from the two-point filter (Table 2-3). Note that there is less energy leaking into the nonzero lags of the output from the three-point filter; therefore, it is spikier. As more terms are included in the inverse filter, the output is closer to being a spike at zero lag. Since the number of points allowed in the operator length is limited, in practice the result never is a perfect spike.
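The polynomial division behind equation (2-11) reduces to a simple recursion for the inverse-filter coefficients. The sketch below, with a hypothetical helper truncated_inverse written for this illustration, reproduces the two- and three-point results of Tables 2-3 and 2-4:

```python
import numpy as np

wavelet = np.array([1.0, -0.5])   # input wavelet w(t) : (1, -1/2)

def truncated_inverse(w, nterms):
    """Coefficients of 1/W(z) by recursive polynomial division, truncated to nterms."""
    f = np.zeros(nterms)
    f[0] = 1.0 / w[0]
    for i in range(1, nterms):
        # Each new coefficient cancels the residual left by the previous ones.
        acc = sum(w[k] * f[i - k] for k in range(1, min(i, len(w) - 1) + 1))
        f[i] = -acc / w[0]
    return f

for n in (2, 3):
    f = truncated_inverse(wavelet, n)
    print(f, np.convolve(f, wavelet))
# n=2: filter (1, 1/2)      -> output (1, 0, -1/4)      (Table 2-3)
# n=3: filter (1, 1/2, 1/4) -> output (1, 0, 0, -1/8)   (Table 2-4)
```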

Table 2-3. Design and application of the truncated inverse filter (1, 1/2) with the input wavelet (1, -1/2).

Filter Design
  Input Wavelet         w(t) : (1, -1/2)
  The z-Transform       W(z) = 1 - (1/2)z
  The Inverse           F(z) = 1 + (1/2)z + (1/4)z^2 + ...
  The Inverse Filter    f(t) : (1, 1/2, 1/4, ...)

Filter Application
  Truncated Inverse Filter   (1, 1/2)
  Input Wavelet              (1, -1/2)
  Actual Output              (1, 0, -1/4)
  Desired Output             (1, 0, 0)

  Convolution Table:
                   1     -1/2     Output
     1/2     1                       1
             1/2     1               0
                     1/2     1      -1/4


Table 2-4. Design and application of the truncated inverse filter (1, 1/2, 1/4) with the input wavelet (1, -1/2).

Filter Design
  Input Wavelet         w(t) : (1, -1/2)
  The z-Transform       W(z) = 1 - (1/2)z
  The Inverse           F(z) = 1 + (1/2)z + (1/4)z^2 + ...
  The Inverse Filter    f(t) : (1, 1/2, 1/4, ...)

Filter Application
  Truncated Inverse Filter   (1, 1/2, 1/4)
  Input Wavelet              (1, -1/2)
  Actual Output              (1, 0, 0, -1/8)
  Desired Output             (1, 0, 0, 0)

  Convolution Table:
                         1     -1/2       Output
     1/4   1/2    1                          1
           1/4    1/2    1                   0
                  1/4    1/2    1            0
                         1/4    1/2    1    -1/8

The inverse of the input wavelet w(t) : (1, -1/2) has coefficients that rapidly decay to zero (equation 2-11). What about the inverse of the input wavelet w(t) : (-1/2, 1)? Again, define the z-transform:

W(z) = -\frac{1}{2} + z. \qquad (2-12)

The z-transform of its inverse is given by the polynomial division:

F(z) = \frac{1}{-\frac{1}{2} + z} = -2 - 4z - 8z^2 - \cdots \qquad (2-13)

As a result, the inverse filter coefficients are given by the divergent series f(t) : (-2, -4, -8, . . .). Truncate this series and convolve the two-point operator with the input wavelet (-1/2, 1) as shown in Table 2-5. The actual output is (1, 0, -4), while the desired output is (1, 0, 0). Not only is the result far from the desired output, but also it is less spiky than the input wavelet (-1/2, 1). The reason for this poor result is that the inverse filter coefficients increase in time rather than decay (equation 2-13). When truncated, the larger coefficients actually are excluded from the computation.

If we kept the third coefficient of the inverse filter in the above example (equation 2-13), then the actual output (Table 2-6) would be (1, 0, 0, -8), which also is a bad approximation to the desired output (1, 0, 0, 0).

Table 2-5. Design and application of the truncated inverse filter (-2, -4) with the input wavelet (-1/2, 1).

Filter Design
  Input Wavelet         w(t) : (-1/2, 1)
  The z-Transform       W(z) = -1/2 + z
  The Inverse           F(z) = -2 - 4z - 8z^2 - ...
  The Inverse Filter    f(t) : (-2, -4, -8, ...)

Filter Application
  Truncated Inverse Filter   (-2, -4)
  Input Wavelet              (-1/2, 1)
  Actual Output              (1, 0, -4)
  Desired Output             (1, 0, 0)

  Convolution Table:
                -1/2     1      Output
     -4    -2                      1
           -4    -2                0
                 -4    -2         -4

Table 2-6. Design and application of the truncated inverse filter (-2, -4, -8) with the input wavelet (-1/2, 1).

Filter Design
  Input Wavelet         w(t) : (-1/2, 1)
  The z-Transform       W(z) = -1/2 + z
  The Inverse           F(z) = -2 - 4z - 8z^2 - ...
  The Inverse Filter    f(t) : (-2, -4, -8, ...)

Filter Application
  Truncated Inverse Filter   (-2, -4, -8)
  Input Wavelet              (-1/2, 1)
  Actual Output              (1, 0, 0, -8)
  Desired Output             (1, 0, 0, 0)

  Convolution Table:
                      -1/2     1      Output
     -8   -4    -2                       1
          -8    -4    -2                 0
                -8    -4    -2           0
                      -8    -4    -2    -8

    Least-Squares Inverse Filtering

A well-behaved input wavelet, such as (1, -1/2) as opposed to (-1/2, 1), has a z-transform whose inverse can be represented by a convergent series. Then the inverse filtering described above yields a good approximation to a zero-lag spike output (1, 0, 0). Can we do even better than that?


Formulate the following problem: Given the input wavelet (1, -1/2), find a two-term filter (a, b) such that the error between the actual output and the desired output (1, 0, 0) is minimum in the least-squares sense. Compute the actual output by convolving the filter (a, b) with the input wavelet (1, -1/2) (Table 2-7). The cumulative energy of the error L is defined as the sum of the squares of the differences between the coefficients of the actual and desired outputs:

L = \left(a - 1\right)^2 + \left(b - \frac{a}{2}\right)^2 + \left(\frac{b}{2}\right)^2. \qquad (2-14)

The task is to find the coefficients (a, b) so that L takes its minimum value. This requires the variation of L with respect to the coefficients (a, b) to vanish (Section B.5). By simplifying equation (2-14), taking the partial derivatives of the quantity L with respect to a and b, and setting the results to zero, we get

\frac{5}{2}a - b = 2, \qquad (2-15a)

and

-a + \frac{5}{2}b = 0. \qquad (2-15b)

We have two equations and two unknowns, namely, the filter coefficients (a, b). The so-called normal set of equations (2-15a) and (2-15b) can be put into the following convenient matrix form:

\begin{pmatrix} 5/2 & -1 \\ -1 & 5/2 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \end{pmatrix}. \qquad (2-16)

By solving for the filter coefficients, we obtain (a, b) : (0.95, 0.38). The design and application of this least-squares inverse filter are summarized in Table 2-7.
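The 2 x 2 system of equation (2-16) can be solved directly; a minimal NumPy sketch (illustrative only):

```python
import numpy as np

# Normal equations (2-16): input wavelet (1, -1/2), desired output (1, 0, 0).
R = np.array([[2.5, -1.0],
              [-1.0, 2.5]])
rhs = np.array([2.0, 0.0])
a, b = np.linalg.solve(R, rhs)
print(round(a, 2), round(b, 2))                        # 0.95 0.38
print(np.round(np.convolve([a, b], [1.0, -0.5]), 2))   # approx. (0.95, -0.1, -0.19), cf. Table 2-7
```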

To quantify the spikiness of this result and compare it with the result from the inverse filter in Table 2-3, compute the energy of the errors made in both (Table 2-8). Note that the least-squares filter yields less error when trying to convert the input wavelet (1, -1/2) to a spike at zero lag (1, 0, 0).

We now examine the performance of the least-squares filter with the input wavelet (-1/2, 1). Note that the inverse filter produced unstable results for this wavelet (Table 2-5). We want to find a two-term filter (a, b) that, when convolved with the input wavelet (-1/2, 1), yields an estimate of the desired spike output (1, 0, 0) (Table 2-9). As before, the least-squares error between the actual output and the desired output should be minimal.

The cumulative energy of the error is given by

L = \left(-\frac{a}{2} - 1\right)^2 + \left(a - \frac{b}{2}\right)^2 + b^2. \qquad (2-17)

Table 2-7. Design and application of a two-term least-squares inverse filter (a, b).

Filter Design
  Convolution of the filter (a, b) with the input wavelet (1, -1/2):

                  1     -1/2    Actual Output    Desired Output
      b     a                        a                 1
            b     a               b - a/2              0
                  b     a           -b/2               0

Filter Application
  Least-Squares Filter   (0.95, 0.38)
  Input Wavelet          (1, -0.5)
  Actual Output          (0.95, -0.09, -0.19)
  Desired Output         (1, 0, 0)

Table 2-8. Error in two-term inverse and least-squares filtering.

  Input: (1, -1/2)
  Desired Output: (1, 0, 0)

                         Actual Output            Error Energy
  Inverse Filter         (1, 0, -0.25)               0.063
  Least-Squares Filter   (0.95, -0.09, -0.19)        0.048

Table 2-9. Design and application of a two-term least-squares inverse filter (a, b).

Filter Design
  Convolution of the filter (a, b) with the input wavelet (-1/2, 1):

                -1/2      1     Actual Output    Desired Output
      b     a                       -a/2               1
            b     a               a - b/2              0
                  b     a            b                 0

Filter Application
  Least-Squares Filter   (-0.48, -0.19)
  Input Wavelet          (-0.5, 1)
  Actual Output          (0.24, -0.38, -0.19)
  Desired Output         (1, 0, 0)


Table 2-10. Error in two-term inverse and least-squares filtering.

  Input: (-1/2, 1)
  Desired Output: (1, 0, 0)

                         Actual Output            Error Energy
  Inverse Filter         (1, 0, -4)                  16
  Least-Squares Filter   (0.24, -0.38, -0.19)        0.762

By simplifying equation (2-17), taking the partial derivatives of the quantity L with respect to a and b, and setting the results to zero, we obtain

\frac{5}{2}a - b = -1, \qquad (2-18a)

and

-a + \frac{5}{2}b = 0. \qquad (2-18b)

Combine equations (2-18a,b) into the matrix form

\begin{pmatrix} 5/2 & -1 \\ -1 & 5/2 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}. \qquad (2-19)

By solving for the filter coefficients, we obtain (a, b) : (-0.48, -0.19). The design and application of this filter are summarized in Table 2-9.

Table 2-10 shows the results from the inverse filter and the least-squares filter quantified. The error made by the least-squares filter is, again, much less than the error made by the truncated inverse filter. However, both filters yield larger errors for the input wavelet (-1/2, 1) (Table 2-10) than for the wavelet (1, -1/2) (Table 2-8). The reason for this is discussed next.

    Minimum Phase

Two input wavelets, wavelet 1: (1, -1/2) and wavelet 2: (-1/2, 1), were used for the numerical analyses of the inverse filter and the least-squares inverse filter in this section. The results indicate that the error in converting wavelet 1 to a zero-lag spike is less than the error in converting wavelet 2 (Tables 2-8 and 2-10).

Is this also true when the desired output is a delayed spike (0, 1, 0)? The cumulative energy of the error L associated with the application of a two-term least-squares filter (a, b) (Table 2-11) to convert the input wavelet (1, -1/2) to a delayed spike (0, 1, 0) is

L = a^2 + \left(b - \frac{a}{2} - 1\right)^2 + \left(\frac{b}{2}\right)^2. \qquad (2-20)

Table 2-11. Design and application of a two-term least-squares inverse filter (a, b).

Filter Design
  Convolution of the filter (a, b) with the input wavelet (1, -1/2):

                  1     -1/2    Actual Output    Desired Output
      b     a                        a                 0
            b     a               b - a/2              1
                  b     a           -b/2               0

Filter Application
  Least-Squares Filter   (-0.09, 0.76)
  Input Wavelet          (1, -0.5)
  Actual Output          (-0.09, 0.81, -0.38)
  Desired Output         (0, 1, 0)

Table 2-12. Error in least-squares filtering.

  Input Wavelet: (1, -1/2)

  Desired Output    Actual Output            Error Energy
  (1, 0, 0)         (0.95, -0.09, -0.19)        0.048
  (0, 1, 0)         (-0.09, 0.81, -0.38)        0.190

By simplifying equation (2-20), taking the partial derivatives of the quantity L with respect to a and b, and setting the results to zero, we obtain

\frac{5}{2}a - b = -1, \qquad (2-21a)

and

-a + \frac{5}{2}b = 2. \qquad (2-21b)

Combine equations (2-21a,b) into the matrix form

\begin{pmatrix} 5/2 & -1 \\ -1 & 5/2 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \end{pmatrix}. \qquad (2-22)

By solving for the filter coefficients, we obtain (a, b) : (-0.09, 0.76). The design and application of this filter are summarized in Table 2-11.

Table 2-12 shows the results of the least-squares filtering to convert the input wavelet (1, -1/2) to zero-lag (Table 2-7) and delayed spikes (Table 2-11).


Table 2-13. Design and application of a two-term least-squares inverse filter (a, b).

Filter Design
  Convolution of the filter (a, b) with the input wavelet (-1/2, 1):

                -1/2      1     Actual Output    Desired Output
      b     a                       -a/2               0
            b     a               a - b/2              1
                  b     a            b                 0

Filter Application
  Least-Squares Filter   (0.76, -0.09)
  Input Wavelet          (-0.5, 1)
  Actual Output          (-0.38, 0.81, -0.09)
  Desired Output         (0, 1, 0)

Note that the input wavelet is converted to a zero-lag spike with less error, and the corresponding actual output more closely resembles the zero-lag spike desired output.

We now examine the performance of the least-squares filter with the input wavelet (-1/2, 1). The cumulative energy of the error L associated with the application of a two-term least-squares filter (a, b) (Table 2-13) to convert the input wavelet (-1/2, 1) to a delayed spike (0, 1, 0) is

L = \left(-\frac{a}{2}\right)^2 + \left(a - \frac{b}{2} - 1\right)^2 + b^2. \qquad (2-23)

By simplifying equation (2-23), taking the partial derivatives of the quantity L with respect to a and b, and setting the results to zero, we obtain

\frac{5}{2}a - b = 2, \qquad (2-24a)

and

-a + \frac{5}{2}b = -1. \qquad (2-24b)

Combine equations (2-24a,b) into the matrix form

\begin{pmatrix} 5/2 & -1 \\ -1 & 5/2 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}. \qquad (2-25)

By solving for the filter coefficients, we obtain (a, b) : (0.76, -0.09). The design and application of this filter are summarized in Table 2-13.

Table 2-14 shows the results of the least-squares filtering to convert the input wavelet (-1/2, 1) to zero-lag (Table 2-9) and delayed spikes (Table 2-13). Note that the input wavelet is converted to a delayed spike with less error, and the corresponding actual output more closely resembles the delayed spike desired output.

Table 2-14. Error in least-squares filtering.

  Input Wavelet: (-1/2, 1)

  Desired Output    Actual Output            Error Energy
  (1, 0, 0)         (0.24, -0.38, -0.19)        0.762
  (0, 1, 0)         (-0.38, 0.81, -0.09)        0.190

Now, evaluate the results of the least-squares inverse filtering summarized in Tables 2-12 and 2-14. Wavelet 1: (1, -1/2) is closer to being a zero-delay spike (1, 0, 0) than wavelet 2: (-1/2, 1). On the other hand, wavelet 2 is closer to being a delayed spike (0, 1, 0) than wavelet 1. We conclude that the error is reduced if the desired output closely resembles the energy distribution in the input series. Wavelet 1 has more energy at the onset, while wavelet 2 has more energy concentrated at the end.
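The same comparison can be made numerically by posing each shaping problem as a small least-squares system built from the convolution matrix of the wavelet. The helper shaping_error below is hypothetical, written only for this illustration; its printed error energies match Tables 2-12 and 2-14:

```python
import numpy as np

def shaping_error(wavelet, desired, nfilt=2):
    """Least-squares error of shaping `wavelet` into `desired` with an nfilt-term filter."""
    nout = len(wavelet) + nfilt - 1
    # Convolution matrix: column k is the wavelet delayed by k samples.
    C = np.zeros((nout, nfilt))
    for k in range(nfilt):
        C[k:k + len(wavelet), k] = wavelet
    f, *_ = np.linalg.lstsq(C, desired, rcond=None)
    return np.sum((C @ f - desired) ** 2)

w1, w2 = np.array([1.0, -0.5]), np.array([-0.5, 1.0])
spike, delayed = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
for w in (w1, w2):
    print([round(shaping_error(w, d), 3) for d in (spike, delayed)])
# wavelet 1 -> [0.048, 0.19]   (front-loaded energy favors the zero-lag spike)
# wavelet 2 -> [0.762, 0.19]   (end-loaded energy favors the delayed spike)
```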

Figure 2.2-2 shows three wavelets with the same amplitude spectrum but with different phase-lag spectra. As a result, their shapes differ. (From Section 1.1, we know that the shape of a wavelet can be altered by changing the phase spectrum without modifying the amplitude spectrum.) The wavelet at the top has more energy concentrated at the onset, the wavelet in the middle has its energy concentrated at the center, and the wavelet at the bottom has most of its energy concentrated at the end.

We say that a wavelet is minimum phase if its energy is maximally concentrated at its onset. Similarly, a wavelet is maximum phase if its energy is maximally concentrated at its end. Finally, in all in-between situations, the wavelet is mixed phase. Note that a wavelet is defined as a transient waveform with a finite duration; it is realizable. A minimum-phase wavelet is one-sided; it is zero before t = 0. A wavelet that is zero for t < 0 is called causal. These definitions are consistent with intuition: physical systems respond to an excitation only after that excitation. Their response also is of finite duration. In summary, a minimum-phase wavelet is realizable and causal.

These observations are quantified by considering the following four three-point wavelets (Robinson, 1966):

  Wavelet A : (4, 0, -1)
  Wavelet B : (2, 3, -2)
  Wavelet C : (-2, 3, 2)
  Wavelet D : (-1, 0, 4)


FIG. 2.2-2. A wavelet has a finite duration. If its energy is maximally front-loaded, then it is minimum-phase (top). If its energy is concentrated mostly in the middle, then it is mixed-phase (middle). Finally, if its energy is maximally end-loaded, then the wavelet is maximum-phase. A quantitative analysis of this phase concept is provided in Figure 2.2-3.

Compute the cumulative energy of each wavelet at any one time. Cumulative energy is computed by adding squared amplitudes as shown in Table 2-15. These values are plotted in Figure 2.2-3. Note that all four wavelets have the same amount of total energy, 17 units.

Table 2-15. Cumulative energy of wavelets A, B, C, and D at time samples 0, 1, and 2.

  Wavelet     0     1     2
  A          16    16    17
  B           4    13    17
  C           4    13    17
  D           1     1    17

FIG. 2.2-3. A quantitative analysis of the minimum- and maximum-phase concept. The fastest rate of energy build-up in time occurs when the wavelet is minimum-phase (A). The slowest rate occurs when the wavelet is maximum-phase (D).

However, the rate at which the energy builds up is significantly different for each wavelet. For example, with wavelet A, the energy builds up rapidly, close to its total value, at the very first time lag. The energy for wavelets B and C builds up relatively slowly. Finally, the energy accumulates at the slowest rate for wavelet D. From Figure 2.2-3, note that the energy curves for wavelets A and D form the upper and lower boundaries. Wavelet A has the least energy delay, while wavelet D has the largest energy delay.
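The energy build-up of Table 2-15 is a running sum of squared amplitudes; a minimal sketch:

```python
import numpy as np

wavelets = {"A": [4, 0, -1], "B": [2, 3, -2], "C": [-2, 3, 2], "D": [-1, 0, 4]}

# Cumulative energy at each time sample (Table 2-15).
for name, w in wavelets.items():
    print(name, np.cumsum(np.square(w)))
# A [16 16 17]   B [ 4 13 17]   C [ 4 13 17]   D [ 1  1 17]
```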

FIG. 2.2-4. All wavelets referred to in Figure 2.2-3 (A, B, C, and D) have the same amplitude spectrum as shown above (adapted from Robinson, 1966).


FIG. 2.2-5. Phase-lag spectra of the wavelets referred to in Figure 2.2-3. They have the common amplitude spectrum of Figure 2.2-4 (adapted from Robinson, 1966).

Given a fixed amplitude spectrum as in Figure 2.2-4, the wavelet with the least energy delay is called minimum delay, while the wavelet with the most energy delay is called maximum delay. This is the basis for Robinson's energy delay theorem: a minimum-phase wavelet has the least energy delay.

Time delay is equivalent to a phase lag. Figure 2.2-5 shows the phase spectra of the four wavelets. Note that wavelet A has the least phase change across the frequency axis; we say it is minimum phase. Wavelet D has the largest phase change; we say it is maximum phase. Finally, wavelets B and C have phase changes between the two extremes; hence, they are mixed phase.

Since all four wavelets have the same amplitude spectrum (Figure 2.2-4) and the same power spectrum, they should have the same autocorrelation. This is verified as shown in Table 2-16, where only one side of the autocorrelation is tabulated, since a real time series has a symmetric autocorrelation (Section 1.1).

Note that the zero lag of the autocorrelation (Table 2-16) is equal to the total energy (Table 2-15) contained in each wavelet, 17 units. This is true for any wavelet. In fact, Parseval's theorem states that the area under the power spectrum is equal to the zero-lag value of the autocorrelation function (Section A.1).
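That all four wavelets share one autocorrelation, with a zero-lag value equal to the total energy of 17 units, can be checked directly; a minimal sketch:

```python
import numpy as np

wavelets = {"A": [4, 0, -1], "B": [2, 3, -2], "C": [-2, 3, 2], "D": [-1, 0, 4]}

for name, w in wavelets.items():
    acf = np.correlate(w, w, mode="full")[len(w) - 1:]   # one-sided autocorrelation
    print(name, acf)                                     # every wavelet: [17  0 -4]
```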

The process by which the seismic wavelet is compressed to a zero-lag spike is called spiking deconvolution. In this section, filters that achieve this goal were studied: the inverse and the least-squares inverse filters. Their performance depends not only on filter length, but also on whether the input wavelet is minimum phase.

Table 2-16. Autocorrelation lags of wavelets A, B, C, and D.

  Wavelet             Lag 0    Lag 1    Lag 2
  A : (4, 0, -1)        17       0       -4
  B : (2, 3, -2)        17       0       -4
  C : (-2, 3, 2)        17       0       -4
  D : (-1, 0, 4)        17       0       -4

The spiking deconvolution operator is strictly the inverse of the wavelet. If the wavelet were minimum phase, then we would get a stable inverse, which also is minimum phase. The term stable means that the filter coefficients form a convergent series. Specifically, the coefficients decrease in time (and vanish at t = \infty); therefore, the filter has finite energy. This is the case for the wavelet (1, -1/2) with the inverse (1, 1/2, 1/4, . . .). The inverse is a stable spiking deconvolution filter. On the other hand, if the wavelet were maximum phase, then it does not have a stable inverse. This is the case for the wavelet (-1/2, 1), whose inverse is given by the divergent series (-2, -4, -8, . . .). Finally, a mixed-phase wavelet does not have a stable inverse. This discussion leads us to assumption 7.

Assumption 7. The seismic wavelet is minimum phase. Therefore, it has a minimum-phase inverse.

Now, a summary of the implications of the underlying assumptions for deconvolution stated in Sections 2.1 and 2.2 is appropriate.

(a) Assumptions 1, 2, and 3 allow formulating the convolutional model of the 1-D seismogram by equation (2-2).


(b) Assumption 4 eliminates the unknown noise term in equation (2-2a) and reduces it to equation (2-3a).
(c) Assumption 5 is the basis for deterministic deconvolution; it allows estimation of the earth's reflectivity series directly from the 1-D seismogram described by equation (2-3a).
(d) Assumption 6 is the basis for statistical deconvolution; it allows estimates of the autocorrelogram and amplitude spectrum of the normally unknown wavelet in equation (2-3a) from the known recorded 1-D seismogram.
(e) Finally, assumption 7 provides a minimum-phase estimate of the phase spectrum of the seismic wavelet from its amplitude spectrum, which is estimated from the recorded seismogram by way of assumption 6.

Once the amplitude and phase spectra of the seismic wavelet are statistically estimated from the recorded seismogram, its least-squares inverse, the spiking deconvolution operator, is computed using optimum Wiener filters (Section 2.3). When applied to the wavelet, the filter converts it to a zero-delay spike. When applied to the seismogram, the filter yields the earth's impulse response (equation 2-4). In Section 2.3, we show that a known wavelet can be converted into a delayed spike even if it is not minimum phase.

    2.3 OPTIMUM WIENER FILTERS

Return to the desired output, the zero-delay spike (1, 0, 0), that was considered when studying the inverse and least-squares filters (Section 2.2). Rewrite equation (2-16), which we solved to obtain the least-squares inverse filter, as follows:

2\begin{pmatrix} 5/4 & -1/2 \\ -1/2 & 5/4 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \end{pmatrix}. \qquad (2-26)

Divide both sides by 2 to obtain

\begin{pmatrix} 5/4 & -1/2 \\ -1/2 & 5/4 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \qquad (2-27)

The autocorrelation of the input wavelet (1, -1/2) is shown in Table 2-17. Note that the autocorrelation lags are the same as the first column of the 2 x 2 matrix on the left side of equation (2-27).

Now compute the crosscorrelation of the desired output (1, 0, 0) with the input wavelet (1, -1/2) (Table 2-18). The crosscorrelation lags are the same as the column matrix on the right side of equation (2-27).

Table 2-17. Autocorrelation lags of the input wavelet (1, -1/2).

             1     -1/2             Output
             1     -1/2               5/4
                    1     -1/2       -1/2

Table 2-18. Crosscorrelation lags of the desired output (1, 0, 0) with the input wavelet (1, -1/2).

             1      0      0     Output
             1    -1/2              1
                    1    -1/2       0

In general, the elements of the matrix on the left side of equation (2-27) are the lags of the autocorrelation of the input wavelet, while the elements of the column matrix on the right side are the lags of the crosscorrelation of the desired output with the input wavelet.
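For the (1, -1/2) example, these auto- and crosscorrelation lags are exactly what np.correlate returns (one-sided lags only); a minimal sketch:

```python
import numpy as np

wavelet = np.array([1.0, -0.5])
desired = np.array([1.0, 0.0, 0.0])
n = len(wavelet)

# Left side of equation (2-27): autocorrelation lags of the input wavelet.
r = np.correlate(wavelet, wavelet, mode="full")[n - 1:]          # [ 1.25 -0.5 ]
# Right side: crosscorrelation lags of the desired output with the input wavelet.
g = np.correlate(desired, wavelet, mode="full")[n - 1:][:n]      # [ 1.  0. ]
print(r, g)
```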

Now perform similar operations for the wavelet (-1/2, 1). By rewriting the matrix equation (2-19), we obtain

2\begin{pmatrix} 5/4 & -1/2 \\ -1/2 & 5/4 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}. \qquad (2-28)

Divide both sides by 2 to obtain

\begin{pmatrix} 5/4 & -1/2 \\ -1/2 & 5/4 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -1/2 \\ 0 \end{pmatrix}. \qquad (2-29)

The autocorrelation of the wavelet (-1/2, 1) is given in Table 2-19. The elements of the matrix on the left side of equation (2-29) are the autocorrelation lags of the input wavelet. Note that the autocorrelation of the wavelet (-1/2, 1) is identical to that of the wavelet (1, -1/2) (Table 2-17). As discussed in Section 2.2, an important property of a group of wavelets with the same amplitude spectrum is that they also have the same autocorrelation.

The crosscorrelation of the desired output (1, 0, 0) with input wavelet (-1/2, 1) is given in Table 2-20. Note that the right side of equation (2-29) is the same as the crosscorrelation lags.


Table 2-19. Autocorrelation lags of input wavelet (-1/2, 1).

    -1/2    1              Output
    -1/2    1                5/4
            -1/2    1       -1/2

Table 2-20. Crosscorrelation lags of desired output (1, 0, 0) with input wavelet (-1/2, 1).

    1      0      0        Output
    -1/2    1               -1/2
            -1/2    1        0

Matrix equations (2-27) and (2-29) were used to derive the least-squares inverse filters (Section 2.2). These filters then were applied to the input wavelets to compress them to a zero-lag spike. The matrices on the left in equations (2-27) and (2-29) are made up of the autocorrelation lags of the input wavelets. Additionally, the column matrices on the right are made up of lags of the crosscorrelation of the desired output, a zero-lag spike, with the input wavelets. These observations were generalized by Wiener to derive filters that convert the input to any desired output (Robinson and Treitel, 1980).

The general form of the matrix equation such as equation (2-29) for a filter of length n is (Section B.5):

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & \cdots & r_{n-1} \\
r_1 & r_0 & r_1 & \cdots & r_{n-2} \\
r_2 & r_1 & r_0 & \cdots & r_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{n-1} & r_{n-2} & r_{n-3} & \cdots & r_0
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{bmatrix}
=
\begin{bmatrix} g_0 \\ g_1 \\ g_2 \\ \vdots \\ g_{n-1} \end{bmatrix}
\tag{2-30}
$$

Here ri, ai, and gi, i = 0, 1, 2, . . . , n - 1, are the autocorrelation lags of the input wavelet, the Wiener filter coefficients, and the crosscorrelation lags of the desired output with the input wavelet, respectively.

The optimum Wiener filter (a0, a1, a2, . . . , an-1) is optimum in that the least-squares error between the actual and desired outputs is minimum. When the desired output is the zero-lag spike (1, 0, 0, . . . , 0), then the Wiener filter is identical to the least-squares inverse filter. In other words, the least-squares inverse filter really is a special case of the Wiener filter.

The Wiener filter applies to a large class of problems in which any desired output can be considered, not just the zero-lag spike. Five choices for the desired output are:

Type 1: Zero-lag spike,
Type 2: Spike at arbitrary lag,
Type 3: Time-advanced form of input series,
Type 4: Zero-phase wavelet,
Type 5: Any desired arbitrary shape.

These desired output forms will be discussed in the following sections.

The general form of the normal equations (2-30) was arrived at through numerical examples for the special case where the desired output was a zero-lag spike. Section B.5 provides a concise mathematical treatment of the optimum Wiener filters. Figure 2.3-1 outlines the design and application of a Wiener filter.

Determination of the Wiener filter coefficients requires solution of the so-called normal equations (2-30). From equation (2-30), note that the autocorrelation matrix is symmetric. This special matrix, called the Toeplitz matrix, can be solved by Levinson recursion, a computationally efficient scheme (Section B.6). To do this, compute a two-point filter, derive from it a three-point filter, and so on, until the n-point filter is derived (Claerbout, 1976). In practice, filtering algorithms based on the optimum Wiener filter theory are known as Wiener-Levinson algorithms.
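As a sketch of how the normal equations (2-30) can be set up and solved in practice, the following fragment is illustrative only; it assumes NumPy and SciPy, whose scipy.linalg.solve_toeplitz routine uses a Levinson-type recursion in the spirit of the Wiener-Levinson algorithms just mentioned. The function name and padding choice are assumptions, not part of any particular package.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def design_wiener_filter(x, d, n):
        """Design an n-point Wiener filter that shapes input series x toward
        desired output d by solving the normal equations (2-30).

        Only the first column of the Toeplitz autocorrelation matrix is needed;
        solve_toeplitz exploits that structure with a Levinson-type recursion."""
        zero_lag = len(x) - 1
        r = np.correlate(x, x, mode='full')[zero_lag:zero_lag + n]   # r0 .. r(n-1)
        g = np.correlate(d, x, mode='full')[zero_lag:zero_lag + n]   # g0 .. g(n-1)
        r = np.pad(r, (0, n - len(r)))      # zero-pad if the series are shorter than n
        g = np.pad(g, (0, n - len(g)))
        return solve_toeplitz(r, g)         # Wiener filter coefficients a0 .. a(n-1)

    # Example: shape the wavelet (-1/2, 1) toward the delayed spike (0, 1, 0);
    # the actual output, np.convolve([-0.5, 1.0], filt), is about (-0.38, 0.81, -0.09).
    filt = design_wiener_filter(np.array([-0.5, 1.0]), np.array([0.0, 1.0, 0.0]), n=2)

Because only the first column of the autocorrelation matrix enters the computation, the full matrix is never formed; that is the practical payoff of the Levinson recursion.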

    Spiking Deconvolution

The process with type 1 desired output (zero-lag spike) is called spiking deconvolution. Crosscorrelation of the desired spike (1, 0, 0, . . . , 0) with input wavelet (x0, x1, x2, . . . , xn-1) yields the series (x0, 0, 0, . . . , 0).

FIG. 2.3-1. A flowchart for Wiener filter design and application.


The generalized form of the normal equations (2-30) takes the special form:

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & \cdots & r_{n-1} \\
r_1 & r_0 & r_1 & \cdots & r_{n-2} \\
r_2 & r_1 & r_0 & \cdots & r_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{n-1} & r_{n-2} & r_{n-3} & \cdots & r_0
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\tag{2-31}
$$

Equation (2-31) was scaled by (1/x0). The least-squares inverse filter, which was discussed in Section 2.2, has the same form as the matrix equation (2-31). Therefore, spiking deconvolution is mathematically identical to least-squares inverse filtering. A distinction, however, is made in practice between the two types of filtering. The autocorrelation matrix on the left side of equation (2-31) is computed from the input seismogram (assumption 6) in the case of spiking deconvolution (statistical deconvolution), whereas it is computed directly from the known source wavelet in the case of least-squares inverse filtering (deterministic deconvolution).
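In program form, the statistical route can be sketched as follows (an illustration with assumed names; the autocorrelation on the left side of equation (2-31) is estimated from the recorded trace itself rather than from a known source wavelet):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def spiking_decon(trace, n_op):
        """Spiking deconvolution sketch: the autocorrelation is estimated from
        the trace itself (assumption 6). Assumes len(trace) >= n_op."""
        zero_lag = len(trace) - 1
        r = np.correlate(trace, trace, mode='full')[zero_lag:zero_lag + n_op]
        spike = np.zeros(n_op)
        spike[0] = 1.0                        # desired output: zero-lag spike
        op = solve_toeplitz(r, spike)         # spiking deconvolution operator
        return np.convolve(trace, op)[:len(trace)]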

Figure 2.3-2 is a summary of spiking deconvolution based on the Wiener-Levinson algorithm. Frame (a) is the input mixed-phase wavelet. Its amplitude spectrum shown in frame (b) indicates that the wavelet has most of its energy confined to a 10- to 50-Hz range. The autocorrelation function shown in frame (d) is used in equation (2-31) to compute the spiking deconvolution operator shown in frame (e). The amplitude spectrum of the operator shown in frame (f) is approximately the inverse of the amplitude spectrum of the input wavelet shown in frame (b). (The approximation improves as operator length increases.) This should be expected, since the goal of spiking deconvolution is to flatten the output spectrum. Application of this operator to the input wavelet gives the result shown in frame (k).

Ideally, we would like to get a zero-lag spike, as shown in frame (n). What went wrong? Assumption 7 was violated by the mixed-phase input wavelet shown in frame (a). Frame (h) shows the inverse of the deconvolution operator. This is the minimum-phase equivalent of the input mixed-phase wavelet in frame (a). Both wavelets have the same amplitude spectrum shown in frames (b) and (i), but their phase spectra are significantly different as shown in frames (c) and (j). Since spiking deconvolution is equivalent to least-squares inverse filtering, the minimum-phase equivalent is merely the inverse of the deconvolution operator. Therefore, the amplitude spectrum of the operator is the inverse of the amplitude spectrum of the minimum-phase equivalent as shown in frames (f) and (i), and the phase spectrum of the operator is the negative of the phase spectrum of the minimum-phase wavelet as shown in frames (g) and (j). One way to extract the seismic wavelet, provided it is minimum phase, is to compute the spiking deconvolution operator and find its inverse.

In conclusion, if the input wavelet is not minimum phase, then spiking deconvolution cannot convert it to a perfect zero-lag spike as in frame (k). Although the amplitude spectrum is virtually flat as shown in frame (l), the phase spectrum of the output is not minimum phase as shown in frame (m). Finally, note that the spiking deconvolution operator is the inverse of the minimum-phase equivalent of the input wavelet. This wavelet may or may not be minimum phase.

    Prewhitening

From the preceding section, we know that the amplitude spectrum of the spiking deconvolution operator is (approximately) the inverse of the amplitude spectrum of the input wavelet. This is sketched in Figure 2.3-3. What if we had zeroes in the amplitude spectrum of the input wavelet? To study this, apply a minimum-phase band-pass filter (Exercise 2-10) with a wide passband (3-108 Hz) to the minimum-phase wavelet of Figure 2.3-2, as shown in frame (h). Deconvolution of the filtered wavelet does not produce a perfect spike; instead, a spike accompanied by a high-frequency pre- and post-cursor results (Figure 2.3-4). This poor result occurs because the deconvolution operator tries to boost the absent frequencies, as seen from the amplitude spectrum of the output. Can this problem occur in a recorded seismogram? Situations in which the input amplitude spectrum has zeroes rarely occur. There is always noise in the seismogram and it is additive in both the time and frequency domains. Moreover, numerical noise, which also is additive in the frequency domain, is generated during processing. However, to ensure numerical stability, an artificial level of white noise is added to the amplitude spectrum of the input seismogram before deconvolution. This is called prewhitening and is referred to in Figure 2.3-3.

If the percent prewhitening is given by a scalar ε, 0 < ε < 1, then the normal equations (2-31) are modified as follows:

$$
\begin{bmatrix}
r_0\beta & r_1 & r_2 & \cdots & r_{n-1} \\
r_1 & r_0\beta & r_1 & \cdots & r_{n-2} \\
r_2 & r_1 & r_0\beta & \cdots & r_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{n-1} & r_{n-2} & r_{n-3} & \cdots & r_0\beta
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},
\tag{2-32}
$$


where β = 1 + ε. Adding a constant εr0 to the zero lag of the autocorrelation function is the same as adding white noise to the spectrum, with its total energy equal to that constant. The effect of the prewhitening level on the performance of deconvolution is discussed in Section 2.4.

FIG. 2.3-3. Prewhitening amounts to adding a bias to the amplitude spectrum of the seismogram to be deconvolved. This prevents dividing by zero, since the amplitude spectrum of the inverse filter (middle) is the inverse of that of the seismogram (left). Convolution of the filter with the seismogram is equivalent to multiplying their respective amplitude spectra; this yields nearly a white spectrum (right).
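In terms of the spiking deconvolution sketch given earlier, prewhitening amounts to a single extra step before the normal equations are solved: scale the zero-lag autocorrelation by β = 1 + ε, as in equation (2-32). A minimal illustration (the function and parameter names are assumptions, not part of any particular package):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def prewhitened_spiking_op(trace, n_op, epsilon=0.001):
        """Spiking deconvolution operator with prewhitening (equation 2-32).
        epsilon = 0.001 corresponds to 0.1 percent prewhitening; beta = 1 + epsilon."""
        zero_lag = len(trace) - 1
        r = np.correlate(trace, trace, mode='full')[zero_lag:zero_lag + n_op]
        r[0] *= 1.0 + epsilon             # scale the zero lag: add white noise to the spectrum
        spike = np.zeros(n_op)
        spike[0] = 1.0
        return solve_toeplitz(r, spike)   # numerically stabilized spiking operator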

    Wavelet Processing by Shaping Filters

Spiking deconvolution had trouble compressing wavelet (-1/2, 1) to a zero-lag spike (1, 0, 0) (Table 2-14). In terms of energy distribution, this input wavelet is more similar to a delayed spike, such as (0, 1, 0), than it is to a zero-lag spike, (1, 0, 0). Therefore, a filter that converts wavelet (-1/2, 1) to a delayed spike would yield less error than the filter that shapes it to a zero-lag spike (Table 2-14).

Recast the filter design and application outlined in Table 2-13 in terms of optimum Wiener filters by following the flowchart in Figure 2.3-1. First, compute the crosscorrelation (Table 2-21). From Table 2-19, we know the autocorrelation of the input wavelet. By substituting the results from Tables 2-19 and 2-21 into the matrix equation (2-30), we get

$$
\begin{bmatrix} 5/4 & -1/2 \\ -1/2 & 5/4 \end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix} =
\begin{bmatrix} 1 \\ -1/2 \end{bmatrix}. \tag{2-33}
$$

By solving for the filter coefficients, we obtain (a, b) : (16/21, -2/21). This filter is applied to the input wavelet as shown in Table 2-22. As we would expect, the output is the same as that of the least-squares filter (Table 2-13). Note that, from Table 2-14, the energy of the least-squares error between the actual and desired outputs was 0.190 and 0.762 for a delayed-spike and a zero-lag-spike desired output, respectively. This shows that there is less error when converting wavelet (-1/2, 1) to the delayed spike (0, 1, 0) than to the zero-lag spike (1, 0, 0).

In general, for any given input wavelet, a series of desired outputs can be defined as delayed spikes. The least-squares errors then can be plotted as a function of delay. The delay (lag) that corresponds to the least error is chosen to define the desired delayed-spike output. The actual output from the Wiener filter using this optimum delayed spike should be the most compact possible result.

Table 2-21. Crosscorrelation lags of desired output (0, 1, 0) with input wavelet (-1/2, 1).

    0      1      0        Output
    -1/2    1                1
            -1/2    1       -1/2

Table 2-22. Convolution of input wavelet (-1/2, 1) with filter coefficients (16/21, -2/21).

            -1/2     1               Output
    -2/21   16/21                    -0.38
            -2/21   16/21             0.81
                    -2/21   16/21    -0.09
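The search over candidate spike delays described above can be sketched as follows. This is an illustration only, assuming NumPy and SciPy; the design step follows equation (2-30), and the error is the energy of the difference between the desired and actual outputs.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def best_spike_delay(x, n_op, max_delay):
        """Scan delayed-spike desired outputs and return the delay with the
        smallest least-squares shaping error, plus the error for each delay."""
        zero_lag = len(x) - 1
        r = np.correlate(x, x, mode='full')[zero_lag:zero_lag + n_op]
        r = np.pad(r, (0, n_op - len(r)))
        errors = []
        for delay in range(max_delay + 1):
            d = np.zeros(len(x) + n_op - 1)       # desired output: spike at this delay
            d[delay] = 1.0
            g = np.correlate(d, x, mode='full')[zero_lag:zero_lag + n_op]
            a = solve_toeplitz(r, g)              # shaping filter for this delay
            y = np.convolve(x, a)                 # actual output
            errors.append(np.sum((d - y) ** 2))   # energy of the least-squares error
        return int(np.argmin(errors)), errors

    # For the wavelet (-1/2, 1) and a two-point filter, the errors are about
    # 0.762 (zero-lag spike) and 0.190 (one-sample delay), as quoted above.
    best, errs = best_spike_delay(np.array([-0.5, 1.0]), n_op=2, max_delay=1)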


FIG. 2.3-4. (a) Minimum-phase wavelet, (b) after band-pass filtering, (c) followed by deconvolution. The amplitude spectrum of the band-pass filtered wavelet is zero above 108 Hz (middle row); therefore, the inverse filter derived from it yields unstable results (bottom row). The time delays on the wavelets in the left frames of the middle and bottom rows are for display purposes only.

The process that has a type 5 desired output (any desired arbitrary shape) is called wavelet shaping. The filter that does this is called a Wiener shaping filter. In fact, type 2 (delayed spike) and type 4 (zero-phase wavelet) desired outputs are special cases of the more general wavelet shaping.

Figure 2.3-5 shows a series of wavelet shapings that use delayed spikes as desired outputs. The input is a mixed-phase wavelet. Filter length was held constant in all eight cases. Note that the zero-delay spike case (spiking deconvolution) does not always yield the best result (Figure 2.3-5a). A delay in the neighborhood of 60 ms (Figure 2.3-5e) seems to yield an output that is closest to being a perfect spike. Typically, the process is not very sensitive to the amount of delay once it is close to the optimum delay. If the input wavelet were minimum-phase, then the optimum delay of the desired output spike generally is zero. On the other hand, if the input wavelet were mixed-phase, as illustrated in Figure 2.3-5, then the optimum delay is nonzero. Finally, if the input wavelet were maximum-phase, then the optimum delay is the length of that wavelet (Robinson and Treitel, 1980).

Can we not delay the desired spike output (Figure 2.3-5) and obtain a better result than we obtained from spiking deconvolution? This goal is achieved by applying a constant-time shift (60 ms in Figure 2.3-5) to a delayed spike result. Better yet, the same result can be obtained by shifting the shaping filter operator as much as the delay in the spike and applying it to the input wavelet. Such a filter operator is two-sided (noncausal), since it has coefficients for negative and positive time values. The one-sided filter defined along the positive time axis has an anticipation component, while the filter defined along the negative time axis has a memory component (Robinson and Treitel, 1980). The two-sided filter has an anticipation component and a memory component. Figure 2.3-6 shows a series of shaping filterings with two-sided Wiener filters for various spike delay values.


FIG. 2.3-5. Shaping filtering. (0) Input wavelet, (1) desired output, (2) shaping filter operator, (3) actual output. Here, the purpose is to convert the mixed-phase wavelet (0) to a series of delayed spikes as shown in (a) through (h) by using a one-sided operator (anticipation component only). The best result is with a 60-ms delay (e).

FIG. 2.3-6. Shaping filtering. (0) Input wavelet, (1) desired output, (2) shaping filter operator, (3) actual output. Here, the purpose is to convert the mixed-phase wavelet (0) to a series of delayed spikes as shown in (a) through (h) using a two-sided operator (with memory and anticipation components). The best result is obtained with a zero-delay spike using a two-sided filter (a).

Figure 2.3-7 shows examples of wavelet shaping. The input wavelet represented by trace (b) is the same mixed-phase wavelet as in Figure 2.3-6 (top left frame). This wavelet is shaped into zero-phase wavelets with three different bandwidths represented by traces (c), (d), and (e). This process commonly is referred to as dephasing. Figure 2.3-7 shows another wavelet shaping in which the input wavelet is converted to its minimum-phase equivalent represented by trace (f). This conversion is often applied to recorded air-gun signatures.

Figure 2.3-8 shows examples of a recorded air-gun signature that was shaped into its minimum-phase equivalent and into a spike. When the input is the recorded signature, then the wavelet shapings in Figure 2.3-8 are called signature processing.

Wavelet shaping requires knowledge of the input wavelet to compute the crosscorrelation column on the right side of equation (2-30). If it is unknown, which is the case in reality, then the minimum-phase equivalent of the input wavelet can be estimated statistically from the data. This minimum-phase estimate then is shaped to a zero-phase wavelet.

Wavelet processing is a term that is used with flexibility. The most common meaning refers to estimating (somehow) the basic wavelet embedded in the seismogram, designing a shaping filter to convert the estimated wavelet to a desired form, usually a broad-band zero-phase wavelet (Figure 2.3-8), and finally, applying the shaping filter to the seismogram. Another type of wavelet processing involves wavelet shaping in which the desired output is the zero-phase wavelet with the same amplitude spectrum as that of the input wavelet (Figure 2.3-9). Note that this type of wavelet processing does not try to flatten the spectrum, but only tries to correct for the phase of the input wavelet, which sometimes is assumed to be minimum-phase.

    Predictive Deconvolution

The type 3 desired output, a time-advanced form of the input series, suggests a prediction process. Given the input x(t), we want to predict its value at some future time (t + α), where α is the prediction lag.


FIG. 2.3-7. Shaping filtering with various desired outputs. (a) Impulse response, (b) input seismogram. Here, (c), (d), and (e) show three possible desired outputs that are band-limited zero-phase wavelets, while (f) shows a desired output that is the minimum-phase equivalent of the input wavelet (b). Finally, (g) and (h) are desired outputs that are band-pass filtered versions of (f).

Wiener showed that the filter used to estimate x(t + α) can be computed by using a special form of the matrix equation (2-30) (Robinson and Treitel, 1980). Since the desired output x(t + α) is the time-advanced version of the input x(t), we need to specialize the right side of equation (2-30) for the prediction problem.

Consider a five-point input time series x(t) : (x0, x1, x2, x3, x4), and set α = 2. The autocorrelation of the input series is computed in Table 2-23, and the crosscorrelation between the desired output x(t + 2) and the input x(t) is computed in Table 2-24. Compare the results in Tables 2-23 and 2-24, and note that gi = ri+α for α = 2 and i = 0, 1, 2, 3, 4.

Equation (2-30), for this special case, is rewritten as follows:

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & r_3 & r_4 \\
r_1 & r_0 & r_1 & r_2 & r_3 \\
r_2 & r_1 & r_0 & r_1 & r_2 \\
r_3 & r_2 & r_1 & r_0 & r_1 \\
r_4 & r_3 & r_2 & r_1 & r_0
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}
=
\begin{bmatrix} r_2 \\ r_3 \\ r_4 \\ r_5 \\ r_6 \end{bmatrix}. \tag{2-34}
$$

The prediction filter coefficients a(t) : (a0, a1, a2, a3, a4) can be computed from equation (2-34) and applied to the input series x(t) : (x0, x1, x2, x3, x4) to compute the actual output y(t) : (y0, y1, y2, y3, y4) (Table 2-25). We want to predict the time-advanced form of the input; hence, the actual output is an estimate of the series x(t + α) : (x2, x3, x4), where α = 2. The prediction error series e(t) = x(t + α) - y(t) : (e2, e3, e4, e5, e6) is given in Table 2-26.

The results in Table 2-26 suggest that the error series can be obtained more directly by convolving the input series x(t) : (x0, x1, x2, x3, x4) with a filter with coefficients (1, 0, -a0, -a1, -a2, -a3, -a4) (Table 2-27).

The results for (e2, e3, e4, e5, e6) are identical (Tables 2-26 and 2-27). Since the series (a0, a1, a2, a3, a4) is called the prediction filter, it is natural to call the series (1, 0, -a0, -a1, -a2, -a3, -a4) the prediction error filter. When applied to the input series, this filter yields the error series in the prediction process (Table 2-27).


FIG. 2.3-8. Signature processing: (a) Recorded signature, (b) desired output, (c) shaping operator, (d) shaped signature. The desired output is a zero-delay spike (top) and the minimum-phase equivalent of the recorded signature (bottom).

Table 2-23. Autocorrelation lags of input series x(t) : (x0, x1, x2, x3, x4).

    r0 = x0^2 + x1^2 + x2^2 + x3^2 + x4^2
    r1 = x0x1 + x1x2 + x2x3 + x3x4
    r2 = x0x2 + x1x3 + x2x4
    r3 = x0x3 + x1x4
    r4 = x0x4
    r5 = 0
    r6 = 0

Table 2-24. Crosscorrelation of desired output x(t + α) : (x2, x3, x4), α = 2, with input x(t) : (x0, x1, x2, x3, x4).

    g0 = x0x2 + x1x3 + x2x4
    g1 = x0x3 + x1x4
    g2 = x0x4
    g3 = 0
    g4 = 0

Table 2-25. Convolution of prediction filter a(t) : (a0, a1, a2, a3, a4) with input series x(t) : (x0, x1, x2, x3, x4) to compute actual output y(t) : (y0, y1, y2, y3, y4).

    y0 = a0x0
    y1 = a1x0 + a0x1
    y2 = a2x0 + a1x1 + a0x2
    y3 = a3x0 + a2x1 + a1x2 + a0x3
    y4 = a4x0 + a3x1 + a2x2 + a1x3 + a0x4

Table 2-26. The error series e(t) = x(t + α) - y(t) : (e2, e3, e4, e5, e6), α = 2. For y(t), see Table 2-25.

    e2 = x2 - a0x0
    e3 = x3 - a1x0 - a0x1
    e4 = x4 - a2x0 - a1x1 - a0x2
    e5 = 0 - a3x0 - a2x1 - a1x2 - a0x3
    e6 = 0 - a4x0 - a3x1 - a2x2 - a1x3 - a0x4

Table 2-27. Convolution of prediction error filter coefficients (1, 0, -a0, -a1, -a2, -a3, -a4) with input series x(t) : (x0, x1, x2, x3, x4).

    e0 = x0
    e1 = x1
    e2 = x2 - a0x0
    e3 = x3 - a1x0 - a0x1
    e4 = x4 - a2x0 - a1x1 - a0x2
    e5 = 0 - a3x0 - a2x1 - a1x2 - a0x3
    e6 = 0 - a4x0 - a3x1 - a2x2 - a1x3 - a0x4
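The equivalence between Tables 2-26 and 2-27 is easy to verify numerically. In the sketch below (assuming NumPy), the input series and the prediction filter coefficients are arbitrary illustrations; the identity holds for any coefficients:

    import numpy as np

    x = np.array([1.0, 0.5, -0.3, 0.2, -0.1])     # five-point input series (arbitrary)
    a = np.array([0.4, -0.2, 0.1, 0.05, -0.03])   # prediction filter coefficients (arbitrary)

    # Tables 2-25 and 2-26: predict x(t + 2), then subtract from the advanced input.
    y = np.convolve(x, a)[:len(x)]                # y0 .. y4
    x_advanced = np.append(x[2:], [0.0, 0.0])     # x2, x3, x4, 0, 0
    e_direct = x_advanced - y                     # e2 .. e6

    # Table 2-27: convolve the input with the prediction error filter (1, 0, -a0, ..., -a4).
    pef = np.concatenate(([1.0, 0.0], -a))
    e_filter = np.convolve(x, pef)                # e0, e1, e2, ..., e6, ...

    print(np.allclose(e_direct, e_filter[2:7]))   # True: the two error series match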


FIG. 2.3-9. Wavelet processing. An autocorrelogram (a), estimated from the seismic trace, is used after smoothing (b) to compute the spiking deconvolution operator (d). Here (c) is just a one-sided version of (b). The inverse of the operator (d) is the minimum-phase wavelet (e), which is sometimes assumed to be the basic wavelet contained in the original seismic trace. It is easy to compute its zero-phase equivalent (f) and design a shaping filter (g) that converts the minimum-phase wavelet (e) to the zero-phase wavelet (f). The actual output is (h), which should be compared with (f). The zero-phase equivalent (f) has the same amplitude spectrum as the minimum-phase wavelet (e).

Why place so much emphasis on the error series? Consider the prediction process as it relates to a seismic trace. From the past values of a time series up to time t, a future value can be predicted at time t + α, where α is the prediction lag. A seismic trace often has a predictable component (multiples) with a periodic rate of occurrence. According to assumption 6, anything else, such as primary reflections, is unpredictable.

FIG. 2.3-10. A flowchart for predictive deconvolution using a prediction filter.

Some may claim that reflections are predictable as well; this may be the case if deposition is cyclic. However, this type of deposition is not often encountered. While the prediction filter yields the predictable component (the multiples) of a seismic trace, the remaining unpredictable part, the error series, is essentially the reflection series.

Equation (2-34) can be generalized for the case of an n-long prediction filter and an α-long prediction lag.

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & \cdots & r_{n-1} \\
r_1 & r_0 & r_1 & \cdots & r_{n-2} \\
r_2 & r_1 & r_0 & \cdots & r_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{n-1} & r_{n-2} & r_{n-3} & \cdots & r_0
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{bmatrix}
=
\begin{bmatrix} r_{\alpha} \\ r_{\alpha+1} \\ r_{\alpha+2} \\ \vdots \\ r_{\alpha+n-1} \end{bmatrix}
\tag{2-35}
$$

Note that design of the prediction filters requires only the autocorrelation of the input series.

There are two approaches to predictive deconvolution:

(1) The prediction filter (a0, a1, a2, . . . , an-1) may be designed using equation (2-35) and applied to the input series as described in Figure 2.3-10.

(2) Alternatively, the prediction error filter (1, 0, 0, . . . , 0, -a0, -a1, -a2, . . . , -an-1) can be designed and convolved with the input series as described in Figure 2.3-11 (a sketch of both routes follows this list).
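Both routes can be sketched in a few lines. The fragment below is illustrative, assuming NumPy and SciPy; the function and parameter names are not from any particular package, and the trace is assumed to be longer than the operator length plus the prediction lag.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def predictive_decon(trace, n_op, lag):
        """Predictive deconvolution sketch.

        Designs an n_op-point prediction filter with prediction lag `lag` from
        the autocorrelation of the trace (equation 2-35) and returns the
        prediction error series, computed both with the prediction filter
        (Figure 2.3-10) and with the prediction error filter (Figure 2.3-11)."""
        nt = len(trace)
        acf = np.correlate(trace, trace, mode='full')[nt - 1:]   # r0, r1, r2, ...
        a = solve_toeplitz(acf[:n_op], acf[lag:lag + n_op])      # prediction filter

        # Route (1): predict, delay the prediction by `lag`, subtract from the input.
        predicted = np.convolve(trace, a)[:nt]
        error1 = np.array(trace, dtype=float)
        error1[lag:] -= predicted[:nt - lag]

        # Route (2): convolve the input with the prediction error filter
        # (1, 0, ..., 0, -a0, -a1, ..., -a_(n_op-1)) with (lag - 1) zeros.
        pef = np.concatenate(([1.0], np.zeros(lag - 1), -a))
        error2 = np.convolve(trace, pef)[:nt]

        return error2        # error1 and error2 agree sample for sample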

FIG. 2.3-11. A flowchart for predictive deconvolution using a prediction error filter.


Now consider the special case of unit prediction lag, α = 1. For n = 5, equation (2-35) takes the following form:

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & r_3 & r_4 \\
r_1 & r_0 & r_1 & r_2 & r_3 \\
r_2 & r_1 & r_0 & r_1 & r_2 \\
r_3 & r_2 & r_1 & r_0 & r_1 \\
r_4 & r_3 & r_2 & r_1 & r_0
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}
=
\begin{bmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \\ r_5 \end{bmatrix}. \tag{2-36}
$$

By augmenting the right side to the left side, we obtain:

$$
\begin{bmatrix}
r_1 & r_0 & r_1 & r_2 & r_3 & r_4 \\
r_2 & r_1 & r_0 & r_1 & r_2 & r_3 \\
r_3 & r_2 & r_1 & r_0 & r_1 & r_2 \\
r_4 & r_3 & r_2 & r_1 & r_0 & r_1 \\
r_5 & r_4 & r_3 & r_2 & r_1 & r_0
\end{bmatrix}
\begin{bmatrix} -1 \\ a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}. \tag{2-37}
$$

Add one row and move the negative sign to the column matrix that represents the filter coefficients to get:

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & r_3 & r_4 & r_5 \\
r_1 & r_0 & r_1 & r_2 & r_3 & r_4 \\
r_2 & r_1 & r_0 & r_1 & r_2 & r_3 \\
r_3 & r_2 & r_1 & r_0 & r_1 & r_2 \\
r_4 & r_3 & r_2 & r_1 & r_0 & r_1 \\
r_5 & r_4 & r_3 & r_2 & r_1 & r_0
\end{bmatrix}
\begin{bmatrix} 1 \\ -a_0 \\ -a_1 \\ -a_2 \\ -a_3 \\ -a_4 \end{bmatrix}
=
\begin{bmatrix} L \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \tag{2-38}
$$

where L = r0 - r1a0 - r2a1 - r3a2 - r4a3 - r5a4. Note that there are six unknowns, (a0, a1, a2, a3, a4, L), and six equations. Solution of these equations yields the unit-delay prediction error filter (1, -a0, -a1, -a2, -a3, -a4) and the quantity L, the error in the filtering process (Section B.5). We can rewrite equation (2-38) as follows:

$$
\begin{bmatrix}
r_0 & r_1 & r_2 & r_3 & r_4 & r_5 \\
r_1 & r_0 & r_1 & r_2 & r_3 & r_4 \\
r_2 & r_1 & r_0 & r_1 & r_2 & r_3 \\
r_3 & r_2 & r_1 & r_0 & r_1 & r_2 \\
r_4 & r_3 & r_2 & r_1 & r_0 & r_1 \\
r_5 & r_4 & r_3 & r_2 & r_1 & r_0
\end{bmatrix}
\begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \end{bmatrix}
=
\begin{bmatrix} L \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \tag{2-39}
$$

where b0 = 1 and b1 = -a0, b2 = -a1, . . . , b5 = -a4. This equation has a familiar structure. In fact, except for the scale factor L, it has the same form as equation (2-31), which yields the coefficients for the least-squares zero-delay inverse filter. This inverse filter is therefore the same as the prediction error filter with unit prediction lag, except for a scale factor. Hence, spiking deconvolution actually is a special case of predictive deconvolution with unit prediction lag.
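This relationship is easy to confirm numerically: with unit prediction lag, the prediction error filter designed from a trace equals the spiking deconvolution operator designed from the same trace, up to the scale factor L. A sketch, using an arbitrary synthetic trace (assuming NumPy and SciPy):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(0)
    trace = rng.standard_normal(200)       # arbitrary synthetic trace
    n = 5                                  # prediction filter length

    acf = np.correlate(trace, trace, mode='full')[len(trace) - 1:]

    # Unit-lag prediction filter (equation 2-36) and its prediction error filter
    a = solve_toeplitz(acf[:n], acf[1:n + 1])
    pef = np.concatenate(([1.0], -a))      # (1, -a0, ..., -a4)

    # Spiking deconvolution operator of the same length (equation 2-31)
    spike = np.zeros(n + 1)
    spike[0] = 1.0
    spike_op = solve_toeplitz(acf[:n + 1], spike)

    # The two filters are identical up to the scale factor L = 1 / spike_op[0]
    print(np.allclose(pef, spike_op / spike_op[0]))   # True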

We now know that predictive deconvolution is a general process that encompasses spiking deconvolution. In general, the following statement can be made: Given an input wavelet of length (n + α), the prediction error filter contracts it to an α-long wavelet, where α is the prediction lag (Peacock and Treitel, 1969). When α = 1, the procedure is called spiking deconvolution.

Figure 2.3-12 interrelates the various filters discussed in this chapter and indicates the kind of process they imply. From Figure 2.3-12, note that Wiener filters can be used to solve a wide range of problems. In particular, predictive deconvolution is an integral part of seismic data processing that is aimed at compressing the seismic wavelet, thereby increasing temporal resolution. In the limit, it can be used to spike the seismic wavelet and obtain an estimate for reflectivity.

FIG. 2.3-12. A flowchart for interrelations between various deconvolution filters.


2.4 PREDICTIVE DECONVOLUTION IN PRACTICE

It now is appropriate to review the implications of the assumptions stated in Sections 2.1 and 2.2 that underlie the process of deconvolution within the context of predictive deconvolution.

(a) Assumptions 1, 2, and 3 are the basis for the convolutional model of the recorded seismogram (Section 2.1). In practice, deconvolution often yields good results in areas where these three assumptions are not strictly valid.

(b) Assumption 3 can be relaxed in practice by considering a time-variant deconvolution (Section 2.6). In this technique, a seismogram is divided into a number of time gates, typically three or more. Deconvolution operators then are designed from each gate and convolved with data within that gate. Alternatively, time-variant spectral whitening can be used to account for nonstationarity (Section 2.6).

(c) Not much can be done about assumption 4. However, noise can be minimized in the recording process. Deconvolution operators can be designed using time gates and frequency bands with low noise levels. Poststack deconvolution can be used in an effort to take advantage of the noise reduction inherent in the stacking process.

(d) If the source wavelet were minimum-phase and known (assumption 5), then a perfect result could be obtained from deconvolution in the noise-free case as in trace (c) of Figures 2.4-1 and 2.4-2.

(e) If assumption 6 were violated and if the source waveform were not known, then you would have problems as in trace (d) of Figures 2.4-1 and 2.4-2.

(f) The quality of the output from spiking deconvolution is degraded further when the source wavelet is not minimum-phase, as in Figures 2.4-3 and 2.4-4; that is, when assumption 7 is violated.

(g) Finally, in addition to violating assumptions 5 and 7, if there were noise in the data, that is, when assumption 4 is violated, then the result of the deconvolution would be unacceptable, as in Figure 2.4-5.

Figures 2.4-1 through 2.4-5 test our confidence in the usefulness of predictive deconvolution. In reality, deconvolution has been applied to billions of seismic traces; most of the time it has yielded satisfactory results. Figures 2.4-1 through 2.4-5 emphasize the critical assumptions that underlie predictive deconvolution. When deconvolution does not work on some data, the most probable reason is that one or more of the above assumptions has been violated. In the remaining part of this section, a series of numerical experiments will be performed to examine the validity of these assumptions. The purpose of these experiments is to gain a basic understanding of deconvolution from a practical point of view.

    Operator Length

We start with a single, isolated minimum-phase wavelet as in trace (b) of Figure 2.4-6. Assumptions 1 through 5 are satisfied for this wavelet. The ideal result of spiking deconvolution is a zero-lag spike, as indicated by trace (a). In this and the following numerical analyses, we refer to the autocorrelogram and amplitude spectrum (plotted with linear scale) of the output from each deconvolution test to better evaluate the results. In Figure 2.4-6 and the following figures, n, α, and ε refer to operator length of the prediction filter, prediction lag, and percent prewhitening, respectively. The length of the prediction error filter then is n + α.

In Figure 2.4-6, prediction lag is unity and equal to the 2-ms sampling rate, prewhitening is 0%, and operator length varies as indicated in the figure. Short operators yield spikes with small-amplitude and relatively high-frequency tails. The 128-ms-long operator gives an almost perfect spike output. Longer operators whiten the spectrum further, bringing it closer to the spectrum of the impulse response.

The action of spiking deconvolution on the seismogram derived by convolving the minimum-phase wavelet with a sparse-spike series is similar (Figure 2.4-7) to the case of the single isolated wavelet (Figure 2.4-6). Recall that spiking deconvolution basically is inverse filtering where the operator is the least-squares inverse of