A Prediction Problem
Professor A G Constantinides
AGCDSP
A Prediction Problem

Problem: Given a sample set of a stationary process
$$\{x[n], x[n-1], x[n-2], \ldots, x[n-M]\}$$
the aim is to predict the value of the process some time into the future as
$$x[n+m] = f(x[n], x[n-1], x[n-2], \ldots, x[n-M])$$
The function f may be linear or non-linear. We concentrate only on linear prediction functions.
A Prediction Problem

Linear prediction dates back to Gauss in the 18th century. It is extensively used in DSP theory and applications (spectrum analysis, speech processing, radar, sonar, seismology, mobile telephony, financial systems, etc.).
The difference between the predicted and actual value at a specific point in time is called the prediction error.
A Prediction Problem

The objective of prediction is: given the data, to select a linear function that minimises the prediction error.
The Wiener approach examined earlier may be cast into a predictive form in which the desired signal to follow is the next sample of the given process.
Forward & Backward Prediction

If the prediction is written as
$$\hat{x}[n] = f(x[n-1], x[n-2], \ldots, x[n-M])$$
then we have a one-step forward prediction.
If the prediction is written as
$$\hat{x}[n-M] = f(x[n], x[n-1], x[n-2], \ldots, x[n-M+1])$$
then we have a one-step backward prediction.
Forward Prediction Problem

The forward prediction error is then
$$e_f[n] = x[n] - \hat{x}[n]$$
Write the prediction equation as
$$\hat{x}[n] = \sum_{k=1}^{M} w[k]\, x[n-k]$$
As in the Wiener case we minimise the second-order norm of the prediction error.
Forward Prediction Problem

Thus the solution accrues from
$$J = \min_{\mathbf{w}} E\{(e_f[n])^2\} = \min_{\mathbf{w}} E\{(x[n] - \hat{x}[n])^2\}$$
Expanding we have
$$J = \min_{\mathbf{w}}\left[ E\{x[n]^2\} - 2E\{x[n]\hat{x}[n]\} + E\{\hat{x}[n]^2\} \right]$$
Differentiating with respect to the weight vector we obtain
$$\frac{\partial J}{\partial w_i} = -2E\left\{x[n]\,\frac{\partial \hat{x}[n]}{\partial w_i}\right\} + 2E\left\{\hat{x}[n]\,\frac{\partial \hat{x}[n]}{\partial w_i}\right\}$$
Forward Prediction Problem

However
$$\frac{\partial \hat{x}[n]}{\partial w_i} = x[n-i]$$
And hence
$$\frac{\partial J}{\partial w_i} = -2E\{x[n]\,x[n-i]\} + 2E\{\hat{x}[n]\,x[n-i]\}$$
or
$$\frac{\partial J}{\partial w_i} = -2E\{x[n]\,x[n-i]\} + 2E\left\{\sum_{k=1}^{M} w[k]\,x[n-k]\,x[n-i]\right\}$$
Forward Prediction Problem

On substituting with the corresponding correlation sequences we have
$$\frac{\partial J}{\partial w_i} = -2r_{xx}[i] + 2\sum_{k=1}^{M} w[k]\, r_{xx}[i-k]$$
Set this expression to zero for minimisation to yield
$$\sum_{k=1}^{M} w[k]\, r_{xx}[i-k] = r_{xx}[i], \qquad i = 1, 2, 3, \ldots, M$$
Forward Prediction Problem

These are the Normal Equations, or Wiener-Hopf, or Yule-Walker equations, structured for the one-step forward predictor.
In this specific case it is clear that we need only know the autocorrelation properties of the given process to determine the predictor coefficients.
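As a minimal sketch of how these normal equations are solved in practice (the numerical values of r and the use of SciPy's Toeplitz solver are my own illustration, not part of the slides), note that the system is Toeplitz and can be handed to a dedicated solver:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# Assumed autocorrelation values r_xx[0..M] for illustration
r = np.array([1.0, 0.5, 0.1])
M = len(r) - 1

# Normal equations: sum_k w[k] r_xx[i-k] = r_xx[i],  i = 1..M  (Toeplitz system)
w = solve_toeplitz(r[:M], r[1:])
print(w)                                         # forward predictor weights w[1..M]
print(np.allclose(toeplitz(r[:M]) @ w, r[1:]))   # True: the equations are satisfied
```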
Forward Prediction Filter

Set
$$a_M[m] = \begin{cases} 1, & m = 0 \\ -w[m], & m = 1, \ldots, M \end{cases}$$
And rewrite the earlier expression as
$$\sum_{k=0}^{M} a_M[k]\, r_{xx}[m-k] = \begin{cases} \varepsilon_M, & m = 0 \\ 0, & m = 1, 2, \ldots, M \end{cases}$$
where $\varepsilon_M$ is the minimum prediction error power.
These equations are sometimes known as the augmented forward prediction normal equations.
Forward Prediction Filter

The prediction error is then given as
$$e_f[n] = \sum_{k=0}^{M} a_M[k]\, x[n-k]$$
This is an FIR filter known as the prediction-error filter, with transfer function
$$A_f(z) = 1 + a_M[1]z^{-1} + a_M[2]z^{-2} + \cdots + a_M[M]z^{-M}$$
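To make this concrete, here is a hedged end-to-end sketch (the AR(2) test signal, function calls and names are my own assumptions): the predictor weights are estimated from data, the prediction-error filter is formed as [1, -w[1], ..., -w[M]], and filtering the data with it yields an approximately white error sequence.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

# Synthesize an AR(2) process: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + v[n]
rng = np.random.default_rng(1)
x = lfilter([1.0], [1.0, -0.75, 0.5], rng.standard_normal(20000))

# Estimate the autocorrelation and solve the normal equations for w
M, N = 2, len(x)
r = np.array([np.dot(x[:N - m], x[m:]) / N for m in range(M + 1)])
w = solve_toeplitz(r[:M], r[1:])

# Prediction-error filter a_M = [1, -w[1], ..., -w[M]] applied as an FIR filter
a_M = np.concatenate(([1.0], -w))
e_f = lfilter(a_M, [1.0], x)
print(np.round(a_M, 2), np.round(np.var(e_f), 2))   # a_M close to [1, -0.75, 0.5], var(e_f) close to 1
```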
Backward Prediction Problem

In a similar manner, for the backward prediction case we write
$$e_b[n] = x[n-M] - \hat{x}[n-M]$$
and
$$\hat{x}[n-M] = \sum_{k=1}^{M} \tilde{w}[k]\, x[n-k+1]$$
where we assume that the backward predictor filter weights are different from the forward case.
Backward Prediction Problem

Thus, on comparing the forward and backward formulations with the Wiener least squares conditions, we see that the desired signal is now
$$x[n-M]$$
Hence the normal equations for the backward case can be written as
$$\sum_{m=1}^{M} \tilde{w}[m]\, r_{xx}[k-m] = r_{xx}[M-k+1], \qquad k = 1, 2, 3, \ldots, M$$
Backward Prediction Problem

This can be slightly adjusted as
$$\sum_{m=1}^{M} \tilde{w}[M-m+1]\, r_{xx}[k-m] = r_{xx}[k], \qquad k = 1, 2, 3, \ldots, M$$
On comparing this equation with the corresponding forward case it is seen that the two have the same mathematical form and
$$w[m] = \tilde{w}[M-m+1], \qquad m = 1, 2, \ldots, M$$
Or equivalently
$$\tilde{w}[m] = w[M-m+1], \qquad m = 1, 2, \ldots, M$$
Backward Prediction Filter

That is, the backward prediction filter has the same weights as the forward case, but reversed:
$$A_b(z) = a_M[M] + a_M[M-1]z^{-1} + a_M[M-2]z^{-2} + \cdots + z^{-M}$$
This result is significant, and many properties of efficient predictors ensue from it.
Observe that the ratio of the backward prediction error filter to the forward prediction error filter is allpass.
This yields the lattice predictor structures. More on this later.
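As a small hedged check of the allpass claim (the coefficient values here are arbitrary assumptions for illustration), the reversed-coefficient filter divided by the forward filter has unit magnitude response at all frequencies:

```python
import numpy as np
from scipy.signal import freqz

a_f = np.array([1.0, -0.9, 0.4, -0.2])   # assumed forward prediction-error filter A_f(z)
a_b = a_f[::-1]                           # backward filter: same coefficients reversed

_, H = freqz(a_b, a_f, worN=512)          # T(z) = A_b(z) / A_f(z)
print(np.allclose(np.abs(H), 1.0))        # True: the ratio is allpass
```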
Levinson-Durbin Solution of the Normal Equations

The Durbin algorithm solves the following
$$R_m \mathbf{w}_m = \mathbf{r}_m$$
where the right-hand side is a column of autocorrelations, as in the normal equations.
Assume we have a solution for
$$R_k \mathbf{w}_k = \mathbf{r}_k$$
where
$$\mathbf{r}_k = [r_1, r_2, r_3, \ldots, r_k]^T$$
Levinson-Durbin

For the next iteration the normal equations can be written as
$$\begin{bmatrix} R_k & J_k \mathbf{r}_k^* \\ \mathbf{r}_k^T J_k & r_0 \end{bmatrix} \mathbf{w}_{k+1} = \begin{bmatrix} \mathbf{r}_k \\ r_{k+1} \end{bmatrix}$$
where $J_k$ is the k-order counteridentity.
Set
$$\mathbf{w}_{k+1} = \begin{bmatrix} \mathbf{z}_k \\ \alpha_k \end{bmatrix}$$
Levinson-Durbin

Multiply out to yield
$$\mathbf{z}_k = R_k^{-1}(\mathbf{r}_k - \alpha_k J_k \mathbf{r}_k^*) = \mathbf{w}_k - \alpha_k R_k^{-1} J_k \mathbf{r}_k^*$$
Note that
$$J_k R_k^{-1} J_k = (R_k^{-1})^*$$
Hence
$$\mathbf{z}_k = \mathbf{w}_k - \alpha_k J_k \mathbf{w}_k^*$$
That is, the first k elements of $\mathbf{w}_{k+1}$ are adjusted versions of the previous solution.
Levinson-Durbin

The last element follows from the second equation of
$$\begin{bmatrix} R_k & J_k \mathbf{r}_k^* \\ \mathbf{r}_k^T J_k & r_0 \end{bmatrix} \mathbf{w}_{k+1} = \begin{bmatrix} \mathbf{r}_k \\ r_{k+1} \end{bmatrix}$$
That is,
$$\alpha_k = \frac{1}{r_0}\left(r_{k+1} - \mathbf{r}_k^T J_k \mathbf{z}_k\right)$$
Levinson-Durbin

The parameters $\alpha_k$ are known as the reflection coefficients.
These are crucial from the signal processing point of view.
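For completeness, here is a minimal sketch of the order-recursive solution of the Yule-Walker case in Python (the function and variable names are my own; the recursion is written in the prediction-error-filter convention, so the sign convention of the returned reflection coefficients may differ from the slides' α_k):

```python
import numpy as np

def levinson_durbin(r, M):
    """Order-recursive solution of the Yule-Walker normal equations (real case).

    r : autocorrelation values r[0..M]
    Returns the prediction-error filter a = [1, a[1], ..., a[M]],
    the reflection coefficients k[1..M] and the final error power eps.
    """
    a = np.zeros(M + 1)
    a[0] = 1.0
    k = np.zeros(M)
    eps = r[0]
    for m in range(1, M + 1):
        # Reflection coefficient for order m
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k[m - 1] = -acc / eps
        # Order update: a_m[i] = a_{m-1}[i] + k_m * a_{m-1}[m-i]
        a[1:m + 1] = a[1:m + 1] + k[m - 1] * a[m - 1::-1][:m]
        eps *= 1.0 - k[m - 1] ** 2
    return a, k, eps

a, k, eps = levinson_durbin(np.array([1.0, 0.5, 0.1]), 2)
print(a, k, eps)   # a = [1, -0.6, 0.2], consistent with the normal-equation solution above
```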
Levinson-Durbin

The Levinson algorithm solves the problem
$$R_m \mathbf{y} = \mathbf{b}$$
In the same way as for Durbin, we keep track of the solutions to the problems
$$R_k \mathbf{y}_k = \mathbf{b}_k$$
Levinson-Durbin

Thus, assuming $\mathbf{w}_k$ and $\mathbf{y}_k$ to be known at the k-th step, we solve at the next step the problem
$$\begin{bmatrix} R_k & J_k \mathbf{r}_k^* \\ \mathbf{r}_k^T J_k & r_0 \end{bmatrix} \begin{bmatrix} \mathbf{v}_k \\ \beta_k \end{bmatrix} = \begin{bmatrix} \mathbf{b}_k \\ b_{k+1} \end{bmatrix}$$
Levinson-Durbin

Where
$$\mathbf{y}_{k+1} = \begin{bmatrix} \mathbf{v}_k \\ \beta_k \end{bmatrix}$$
Thus
$$\mathbf{v}_k = R_k^{-1}(\mathbf{b}_k - \beta_k J_k \mathbf{r}_k^*) = \mathbf{y}_k - \beta_k J_k \mathbf{w}_k^*$$
and
$$\beta_k = \frac{b_{k+1} - \mathbf{r}_k^T J_k \mathbf{y}_k}{r_0 - \mathbf{r}_k^T \mathbf{w}_k^*}$$
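In practice, a Toeplitz system with a general right-hand side, as in the Levinson problem above, can be handed to a library routine. The sketch below (values assumed for illustration) uses SciPy's solve_toeplitz, which implements a Levinson-type recursion, and checks the answer against the explicitly formed matrix:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

r = np.array([1.0, 0.5, 0.25, 0.1])    # first column of the symmetric Toeplitz R (assumed)
b = np.array([0.3, -0.2, 0.7, 0.05])   # general right-hand side (assumed)

y = solve_toeplitz(r, b)                # Levinson-type O(m^2) solution of R y = b
print(np.allclose(toeplitz(r) @ y, b))  # True: matches the direct solution
```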
Lattice Predictors

Return to the lattice case. We write
$$T_M(z) = \frac{A_b(z)}{A_f(z)}$$
or
$$T_M(z) = \frac{a_M[M] + a_M[M-1]z^{-1} + a_M[M-2]z^{-2} + \cdots + z^{-M}}{1 + a_M[1]z^{-1} + a_M[2]z^{-2} + \cdots + a_M[M]z^{-M}}$$
Lattice Predictors

The above transfer function is allpass of order M. It can be thought of as the reflection coefficient of a cascade of lossless transmission lines, or acoustic tubes.
In this sense it can furnish a simple algorithm for the estimation of the reflection coefficients.
We start with the observation that the transfer function can be written in terms of another allpass filter embedded in a first-order allpass structure.
Lattice Predictors

This takes the form
$$T_M(z) = \frac{\gamma_1 + z^{-1}T_{M-1}(z)}{1 + \gamma_1 z^{-1}T_{M-1}(z)}$$
where $\gamma_1$ is to be chosen to make $T_{M-1}(z)$ of degree (M-1).
From the above we have
$$T_{M-1}(z) = \frac{z\left(T_M(z) - \gamma_1\right)}{1 - \gamma_1 T_M(z)}$$
Lattice Predictors

And hence
$$T_{M-1}(z) = \frac{z\left((a_M[M]-\gamma_1) + (a_M[M-1]-\gamma_1 a_M[1])z^{-1} + \cdots + (1-\gamma_1 a_M[M])z^{-M}\right)}{(1-\gamma_1 a_M[M]) + (a_M[1]-\gamma_1 a_M[M-1])z^{-1} + \cdots + (a_M[M]-\gamma_1)z^{-M}}$$
Where
$$a_{M-1}[r] = \frac{a_M[r] - \gamma_1 a_M[M-r]}{1 - \gamma_1 a_M[M]}$$
Thus, for a reduction in the order, the constant term in the numerator, which is also equal to the highest term in the denominator, must be zero.
Lattice Predictors

This requirement yields
$$\gamma_1 = a_M[M]$$
The realisation structure is shown below.
[Figure: a first-order allpass lattice section with multiplier $\gamma_1$ and delay $z^{-1}$, embedding the lower-order allpass $T_{M-1}(z)$ to realise $T_M(z)$.]
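The order-reduction step just described is, in effect, a step-down recursion that reads off one reflection coefficient per stage from the prediction-error filter. A hedged sketch (my own function name, with γ taken as the highest coefficient at each stage, per the result above):

```python
import numpy as np

def ar_to_reflection(a):
    """Map a prediction-error filter [1, a[1], ..., a[M]] to reflection coefficients."""
    a = np.asarray(a, dtype=float).copy()
    M = len(a) - 1
    gammas = np.zeros(M)
    for m in range(M, 0, -1):
        gamma = a[m]                      # gamma_1 = a_m[m] at each stage
        gammas[m - 1] = gamma
        # Step-down: a_{m-1}[r] = (a_m[r] - gamma * a_m[m-r]) / (1 - gamma * a_m[m])
        a = ((a[:m + 1] - gamma * a[m::-1]) / (1.0 - gamma * a[m]))[:m]
    return gammas

print(ar_to_reflection([1.0, -0.6, 0.2]))   # [-0.5, 0.2] for this example filter
```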
Lattice Predictors

There are many rearrangements that can be made of this structure, through the use of Signal Flow Graphs.
One such rearrangement would be to reverse the direction of signal flow for the lower path. This would yield the standard Lattice Structure as found in several textbooks (viz. the Inverse Lattice).
The lattice structure and the above development are intimately related to the Levinson-Durbin Algorithm.
Lattice Predictors

The form of lattice presented here is not the usual approach to the Levinson algorithm, in that we have developed the inverse filter.
Since the denominator of the allpass is also the denominator of the AR process, the procedure can be seen as an AR-coefficient-to-lattice-structure mapping.
For lattice-to-AR-coefficient mapping we follow the opposite route, i.e. we construct the allpass and read off its denominator.
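For this opposite route, a step-up recursion rebuilds the prediction-error filter from the reflection coefficients. This is a minimal sketch with assumed names, the inverse of the step-down sketch above:

```python
import numpy as np

def reflection_to_ar(gammas):
    """Map reflection coefficients back to the prediction-error filter [1, a[1], ..., a[M]]."""
    a = np.array([1.0])
    for gamma in gammas:
        # Step-up: a_m[r] = a_{m-1}[r] + gamma * a_{m-1}[m-r], so a_m[m] = gamma
        a_ext = np.concatenate((a, [0.0]))
        a = a_ext + gamma * a_ext[::-1]
    return a

print(reflection_to_ar([-0.5, 0.2]))   # recovers [1, -0.6, 0.2]
```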
PSD Estimation

It is evident that if the prediction error is white, then the squared magnitude response of the prediction-error filter multiplied by the input PSD yields a constant. Therefore the input PSD is determined.
Moreover, the inverse prediction filter gives us a means to generate the process as the output from the filter when the input is white noise.
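As a hedged illustration of this synthesis view (the coefficient values, sample counts and comparison method below are my own assumptions): driving 1/A_f(z) with unit-variance white noise produces a process whose estimated PSD follows the model |1/A_f(e^{jω})|².

```python
import numpy as np
from scipy.signal import lfilter, freqz, welch

a_M = np.array([1.0, -0.75, 0.5])      # assumed prediction-error filter coefficients

# Synthesis: white noise through the inverse (all-pole) filter 1/A_f(z)
rng = np.random.default_rng(2)
x = lfilter([1.0], a_M, rng.standard_normal(200000))

# Compare a Welch PSD estimate of x with the model |1/A_f(e^jw)|^2
f, Pxx = welch(x, fs=1.0, nperseg=1024)            # one-sided estimate
_, H = freqz([1.0], a_M, worN=2 * np.pi * f)       # model response at the same frequencies
ratio = Pxx[1:] / (2.0 * np.abs(H[1:]) ** 2)       # factor 2 for the one-sided scaling
print(np.round(np.median(ratio), 2))               # close to 1
```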