Chapter 27: Linear Filtering - Part I: Kalman Filter


Page 1: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Chapter 27: Linear Filtering - Part I: Kalman Filter

Standard Kalman filtering – Linear dynamics

Page 2: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Kalman Filter

Model dynamics – discrete time: xk+1 = M(xk) + wk+1 or, in the linear case, xk+1 = M xk + wk+1

xk: true state

wk: model error

xk ∊ Rn, M: Rn → Rn, M ∊ Rn×n

Observation: zk = h(xk) + vk or zk = H xk + vk

h: Rn → Rm, H ∊ Rm×n, vk ∊ Rm

E(vk) = 0, COV(vk) = Rk

Page 3: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Filtering, Smoothing, Prediction

FN = { zi | 1 ≤ i ≤ N } [Wiener, 1942]

[Kolmogorov, 1942]

Estimating xk from FN:

k < N: smoothing
k = N: filtering
k > N: prediction

See page 464 for this classification.

Page 4: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Statement of the Problem – Linear case:

x0 ~ N(m0, P0)

xk+1 = Mkxk + wk+1, wk ~ N(0, Qk)

zk = Hkxk + vk, vk ~ N(0, Rk)

Given Fk = { zj | 1 ≤ j ≤ k }, find the best estimate of xk that minimizes the mean squared error

E[(xk − x̂k)T(xk − x̂k)] = tr[ E(xk − x̂k)(xk − x̂k)T ] = tr[ P̂k ]

If x̂k is also unbiased, then it is the minimum variance estimate.
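The linear-Gaussian setup above can be written down directly in code. The sketch below is illustrative only: the matrices M, H, Q, R and the prior (m0, P0) are placeholder values, not taken from the text, and the dynamics are assumed time-invariant for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state / 1-observation linear-Gaussian model (all values are placeholders).
n, m = 2, 1
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # dynamics matrix M_k (taken constant here)
H = np.array([[1.0, 0.0]])        # observation operator H_k
Q = 0.01 * np.eye(n)              # COV(w_k) = Q_k
R = 0.25 * np.eye(m)              # COV(v_k) = R_k
m0, P0 = np.zeros(n), np.eye(n)   # x_0 ~ N(m0, P0)

def simulate(N):
    """Generate a truth trajectory x_k and observations z_k = H x_k + v_k."""
    x, xs, zs = rng.multivariate_normal(m0, P0), [], []
    for _ in range(N):
        x = M @ x + rng.multivariate_normal(np.zeros(n), Q)   # x_{k+1} = M x_k + w_{k+1}
        z = H @ x + rng.multivariate_normal(np.zeros(m), R)   # z_k = H x_k + v_k
        xs.append(x)
        zs.append(z)
    return np.array(xs), np.array(zs)
```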

Page 5: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Model Forecast Step

At time k = 0, the initial information F0 – the prior mean m0 and covariance P0 – is given.

Given x̂0 = m0, the predictable part of x1 is x1f = M0 x̂0 = M0 m0.

Error in prediction: e1f = x1 − x1f = M0 (x0 − x̂0) + w1.

Page 6: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Forecast covariance

Covariance of the forecast: P1f = E[ e1f (e1f)T ] = M0 P0 M0T + Q1

Predicted observation: E[z1 | x1 = x1f] = E[H1 x1 + v1 | x1 = x1f] = H1 x1f

COV(z1 | x1f) = E{ [z1 − E(z1|x1f)][z1 − E(z1|x1f)]T } = E(v1 v1T) = R1
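A minimal sketch of the model forecast step for a general step k, assuming time-invariant M, H, Q, R as in the sketch above; the function names are mine, not the text's.

```python
import numpy as np

def forecast(x_a, P_a, M, Q):
    """Model forecast step: propagate the previous analysis (x_a, P_a).

    x_f = M x_a             (predictable part of the next state)
    P_f = M P_a M^T + Q     (forecast error covariance)
    """
    x_f = M @ x_a
    P_f = M @ P_a @ M.T + Q
    return x_f, P_f

def predicted_observation(x_f, H):
    """Predicted observation H x_f; conditional on x = x_f its covariance is R."""
    return H @ x_f
```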

Page 7: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Basic idea

At time k = 1, combine the forecast x1f (covariance P1f) with the observation z1 (covariance R1) to form the analysis; fast-forwarding from k-1 to k, the same step combines xkf (covariance Pkf) with zk (covariance Rk).

Page 8: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Forecast from k-1 to k:

xkf = Mk-1 x̂k-1

Pkf = Mk-1 P̂k-1 Mk-1T + Qk

Page 9: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Observations at time k: zk = Hk xk + vk, vk ~ N(0, Rk)

Model predicted observation: ẑk = Hk xkf

Page 10: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Data Assimilation Step

x̂k = xkf + Kk [ zk − Hk xkf ]   (prior + Kalman gain × innovation)

Page 11: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Posterior estimate – also known as the analysis. Substituting and simplifying:

x̂k = (I − Kk Hk) xkf + Kk zk

Page 12: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Covariance of the analysis:

P̂k = E[ (xk − x̂k)(xk − x̂k)T ] = (I − Kk Hk) Pkf (I − Kk Hk)T + Kk Rk KkT

=> P̂k = (I − Kk Hk) Pkf for the minimizing gain Kk,

where Dk = Hk Pkf HkT + Rk.

Page 13: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Conditions on the Kalman gain – minimization of the total variance:

Kk = Pkf HkT Dk-1 = Pkf HkT [ Hk Pkf HkT + Rk ]-1

1.

2.
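Putting the gain, the analysis, and its covariance together gives the data assimilation step. This is a sketch under the same assumptions as above (time-invariant matrices, names of my choosing); np.linalg.inv is used to mirror the formula, although solving the linear system would be preferable numerically.

```python
import numpy as np

def analysis(x_f, P_f, z, H, R):
    """Data assimilation step: blend the forecast with the observation z.

    D_k   = H P_f H^T + R                (innovation covariance)
    K_k   = P_f H^T D_k^{-1}             (Kalman gain)
    x_hat = x_f + K_k (z - H x_f)        (prior + gain * innovation)
    P_hat = (I - K_k H) P_f              (analysis covariance)
    """
    D = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(D)
    x_hat = x_f + K @ (z - H @ x_f)
    P_hat = (np.eye(len(x_f)) - K @ H) @ P_f
    return x_hat, P_hat, K
```

Alternating the forecast and analysis steps over k = 1, 2, … gives the complete filter cycle described on these slides.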

Page 14: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Comments on the Kalman gain

3. An interpretation of Kk: n = m, Hk = I

Let Pkf = Diag[ P11f, P22f, …, Pnnf ] and Rk = Diag[ R11, R22, …, Rnn ]

Kk = Pkf HkT [ Hk Pkf HkT + Rk ]-1 = Pkf [ Pkf + Rk ]-1
   = Diag[ P11f/(P11f + R11), P22f/(P22f + R22), …, Pnnf/(Pnnf + Rnn) ]

x̂k = xkf + Kk[ zk − Hk xkf ] = xkf + Kk[ zk − xkf ] = (I − Kk) xkf + Kk zk

∴ If Piif is large, zi,k has a larger weight
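A quick numerical illustration of comment 3; the variances used are arbitrary placeholders.

```python
import numpy as np

# Comment 3 numerically: with H_k = I and diagonal P_k^f, R_k, the gain is
# diagonal with entries P_ii^f / (P_ii^f + R_ii).  Values are placeholders.
Pf = np.diag([4.0, 0.25, 1.0])     # large, small, and moderate forecast variances
R = np.diag([1.0, 1.0, 1.0])
H = np.eye(3)

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
print(np.diag(K))                  # [0.8, 0.2, 0.5] = P_ii^f / (P_ii^f + R_ii)
# The component with the largest forecast variance trusts its observation most
# in x_hat = (I - K) x_f + K z.
```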

Page 15: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Comments – special cases

4. The covariances Pkf, P̂k and the gain Kk are independent of the observations.

5. No observations: x̂k = xkf and P̂k = Pkf for all k ≥ 0

xkf = Mk-1 xk-1f = Mk-1 Mk-2 Mk-3 … M1 M0 x0f

Pkf = Mk-1 Pk-1f Mk-1T + Qk
    = M(k-1:0) P0 MT(k-1:0) + ∑ M(k-1:j+1) Qj+1 MT(k-1:j+1)

where M(i:j) = Mi Mi-1 Mi-2 … Mj. If Qj ≡ 0 for all j, then Pkf = M(k-1:0) P0 MT(k-1:0).

Page 16: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Special cases - continued

6. No dynamics: Mk = I, wk ≡ 0, Qk ≡ 0

xk+1 = xk = x

zk = Hkx + vk

xkf = x̂k-1, with x̂0 = E(x0) = m0

Pkf = P̂k-1, with P̂0 = P0

=> the recursions reduce to the static case: the same as (17.2.11) – (17.2.12).

Page 17: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Special cases

7. When observations are perfect: Rk ≡ 0

=> Kk = Pkf HkT [ Hk Pkf HkT + Rk ]-1 = Pkf HkT [ Hk Pkf HkT ]-1

Hk: m × n,  Pkf: n × n,  HkT: n × m  =>  [ Hk Pkf HkT ]-1: m × m

Recall: Hk Kk = Hk Pkf HkT [ Hk Pkf HkT ]-1 = Im

From (27.2.19): P̂k = (I − Kk Hk) Pkf

Page 18: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Special cases

∴ (I − Kk Hk) = (I − Kk Hk)2, i.e., it is idempotent. Fact: an idempotent matrix other than the identity is singular => Rank(I − Kk Hk) ≤ n − 1

∴ Rank(P̂k) ≤ min{ Rank(I − Kk Hk), Rank(Pkf) } ≤ n − 1

∴ Rank(P̂k) ≤ n − 1, so P̂k is singular. ∴ When Rk is small, this causes computational instability.
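A small numerical illustration of comment 7 and the rank argument above; the matrices are arbitrary placeholders.

```python
import numpy as np

# Comment 7 numerically: with R_k = 0 the analysis covariance (I - K H) P^f is
# rank deficient (rank <= n - 1).  Matrices below are arbitrary placeholders.
Pf = np.array([[2.0, 0.3, 0.0],
               [0.3, 1.0, 0.2],
               [0.0, 0.2, 0.5]])
H = np.array([[1.0, 0.0, 0.0]])    # m = 1, n = 3
R = np.zeros((1, 1))               # perfect observation

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
P_hat = (np.eye(3) - K @ H) @ Pf

print(H @ K)                               # [[1.]] -> H K = I_m
print(np.linalg.matrix_rank(P_hat))        # 2 = n - 1: P_hat is singular
```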

Page 19: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Special cases

8. Residual checking: rk = zk − Hk xkf is the innovation

x̂k = xkf + Kk rk

rk = zk − Hk xkf = Hk xk + vk − Hk xkf = Hk (xk − xkf) + vk = Hk ekf + vk

∴ COV(rk) = Hk Pkf HkT + Rk

∴ By computing rk and comparing it with its covariance, we can check whether the filter is working properly.
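One way to operationalize this residual check (not prescribed by the text) is the normalized innovation squared, rkT [Hk Pkf HkT + Rk]-1 rk, whose sample mean should be close to m if the filter is consistent. A sketch, with an ad hoc tolerance:

```python
import numpy as np

def innovation_check(innovations, S_list):
    """Residual check: r_k = z_k - H_k x_k^f should be zero-mean with
    covariance S_k = H_k P_k^f H_k^T + R_k.  The normalized innovation
    squared r_k^T S_k^{-1} r_k then has mean m (the observation dimension)."""
    m = len(innovations[0])
    nis = [float(r @ np.linalg.solve(S, r)) for r, S in zip(innovations, S_list)]
    mean_nis = float(np.mean(nis))
    # Ad hoc 3-sigma test on the sample mean (each NIS term has variance about 2m).
    ok = abs(mean_nis - m) < 3.0 * np.sqrt(2.0 * m / len(nis))
    return mean_nis, ok
```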

Page 20: Chapter 27:  Linear Filtering - Part I: Kalman Filter

10. Computational Cost

Page 21: Chapter 27:  Linear Filtering - Part I: Kalman Filter
Page 22: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Example 27.2.1: Scalar dynamics with no observations. Let a > 0, wk ~ N(0, q), x0 ~ N(m0, P0), and

xk = a xk-1 + wk

E(xk) = a^k E(x0) = a^k m0

Pk = Var(xk) = Var(a xk-1 + wk) = a² Pk-1 + q

∴ Pk = a^2k P0 + q (a^2k − 1)/(a² − 1)   (for a ≠ 1)
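The moment recursion and the closed form can be checked against each other numerically; a, q, m0, P0 below are illustrative values, not taken from the example.

```python
import numpy as np

# Example 27.2.1 numerically: propagate E(x_k) and P_k by the recursion and
# compare with the closed forms.  a, q, m0, P0 are illustrative values.
a, q, m0, P0 = 0.9, 0.5, 2.0, 1.0
N = 50

mean, var = m0, P0
for _ in range(N):
    mean = a * mean                 # E(x_k) = a * E(x_{k-1})
    var = a**2 * var + q            # P_k = a^2 P_{k-1} + q

print(mean, a**N * m0)                                        # a^N m0
print(var, a**(2*N) * P0 + q * (a**(2*N) - 1) / (a**2 - 1))   # closed form
print(q / (1 - a**2))                                         # limit for 0 < a < 1
```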

Page 23: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Scalar dynamics

Note: For given m0, P0, and q, the asymptotic behavior of the moments depends on a:

1. 0 < a < 1: lim E(xk) = 0, lim Pk = q/(1 − a²)

2. a > 1: E(xk) = a^k m0 diverges (unless m0 = 0), and lim Pk = ∞

3. a = 1: xk = x0 + ∑ wj, so E(xk) = m0 and Pk = P0 + k q

Page 24: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Example 27.2.2: Scalar Kalman filtering. xk+1 = a xk + wk+1, wk+1 ~ N(0, q)

zk = h xk + vk, vk ~ N(0, r)

xk+1f = a x̂k

Pk+1f = a² P̂k + q

x̂k = xkf + Kk [ zk − h xkf ]

Kk = Pkf h [ h² Pkf + r ]-1 = P̂k h r-1

P̂k = Pkf − (Pkf)² h² [ h² Pkf + r ]-1 = Pkf r [ h² Pkf + r ]-1
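A sketch of these scalar recursions, with an illustrative simulated run appended; all parameter values are placeholders.

```python
import numpy as np

def scalar_kalman(zs, a, h, q, r, m0, P0):
    """Scalar Kalman filter recursions of Example 27.2.2.

    Forecast:  x_f = a * x_hat,            P_f = a^2 * P_hat + q
    Analysis:  K     = P_f * h / (h^2 * P_f + r)
               x_hat = x_f + K * (z - h * x_f)
               P_hat = P_f * r / (h^2 * P_f + r)
    """
    x_hat, P_hat, out = m0, P0, []
    for z in zs:
        x_f = a * x_hat
        P_f = a * a * P_hat + q
        K = P_f * h / (h * h * P_f + r)
        x_hat = x_f + K * (z - h * x_f)
        P_hat = P_f * r / (h * h * P_f + r)
        out.append((x_hat, P_hat))
    return out

# Illustrative run on simulated data (all parameter values are placeholders).
rng = np.random.default_rng(1)
a, h, q, r = 0.95, 1.0, 0.1, 0.5
x, zs = 0.0, []
for _ in range(100):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    zs.append(h * x + rng.normal(0.0, np.sqrt(r)))
estimates = scalar_kalman(zs, a, h, q, r, m0=0.0, P0=1.0)
```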

Page 25: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Recurrences: Analysis of Stability (HW 1)

xk+1f = a (1 − Kk h) xkf + a Kk zk

ek+1f = a (1 − Kk h) ekf + a Kk vk + wk+1

Pk+1f = a² P̂k + q

P̂k = Pkf r (h² Pkf + r)-1

Page 26: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Example continued

Pk+1f = a² Pkf r / (h² Pkf + r) + q

Pk+1f / r = a² Pkf / (h² Pkf + r) + q/r = a² (Pkf/r) / [ h² (Pkf/r) + 1 ] + q/r

Pk+1 = a² Pk / (h² Pk + 1) + α, where α = q/r (a noise-variance ratio) and Pk = Pkf / r

This is a Riccati equation (first-order, scalar, nonlinear).
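The Riccati recursion is easy to iterate numerically. The sketch below uses h = 1 and compares the limit with the positive root P* of the quadratic derived on the following slides; a and α are illustrative values.

```python
import numpy as np

# Iterate P_{k+1} = a^2 P_k / (h^2 P_k + 1) + alpha with h = 1 and compare the
# limit with the positive root of -P^2 + beta*P + alpha = 0, beta = a^2 + alpha - 1
# (derived on the next slides).  a and alpha are illustrative values.
a, alpha = 1.2, 0.3
beta = a**2 + alpha - 1.0
P_star = 0.5 * (beta + np.sqrt(beta**2 + 4.0 * alpha))   # attracting fixed point P*

P = 0.01                           # arbitrary positive starting value
for _ in range(200):
    P = a**2 * P / (P + 1.0) + alpha

print(P, P_star)                   # the iteration settles at P*
```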

Page 27: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Asymptotic Properties

Let h = 1: Pk+1 = a² Pk / (Pk + 1) + α

Let δk = Pk+1 − Pk = a² Pk/(Pk + 1) − Pk + α = [ −Pk² + Pk(a² − 1 + α) + α ] / (Pk + 1)

∴ δk = g(Pk)/(Pk + 1), where g(Pk) = −Pk² + (a² + α − 1) Pk + α

Page 28: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Example continued

When Pk+1 = Pk, δk = 0: an equilibrium.

δk = 0 iff g(Pk) = 0, i.e.

−Pk² + (a² + α − 1) Pk + α = 0, that is, −Pk² + β Pk + α = 0 with β = a² + α − 1.

The roots are P* = [ β + √(β² + 4α) ] / 2 > 0 and P̄* = [ β − √(β² + 4α) ] / 2 < 0.

Evaluate the derivative of g(·) at P* and P̄*:

g′(Pk) = −2 Pk + β, so g′(P*) = −√(β² + 4α) < 0 and g′(P̄*) = +√(β² + 4α) > 0.

Page 29: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Example continued

∴ P* is an attractor – stable

P̄* is a repellor – unstable. Thus Pk → P*, where P* = a² P*/(P* + 1) + α.

Page 30: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Rate of Convergence

Let yk = pk − p*, where pk+1 = a² pk/(pk + 1) + α. Then

yk+1 = pk+1 − p* = [ a² pk/(pk + 1) + α ] − [ a² p*/(p* + 1) + α ]
     = a² pk/(pk + 1) − a² p*/(p* + 1)
     = a² (pk − p*) / [ (1 + pk)(1 + p*) ]
     = a² yk / [ (1 + pk)(1 + p*) ]

∴ 1/yk+1 = (1 + pk)(1 + p*) / (a² yk) = (1 + yk + p*)(1 + p*) / (a² yk) = [ (1 + p*)/a ]² / yk + (1 + p*)/a²

Page 31: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Rate of convergence – continued. Let zk = 1/yk.

=> zk+1 = c zk + b,

where c = [ (1 + p*)/a ]² and b = (1 + p*)/a².

Iterating: zk = c^k z0 + b (c^k − 1)/(c − 1), which grows without bound when c > 1, i.e., when c = [ (1 + p*)/a ]² > 1. When this is true, yk = 1/zk → 0 at an exponential rate.
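A numerical check of this rate, under the same illustrative a and α as before: the ratio yk+1/yk should approach 1/c.

```python
import numpy as np

# Ratio test for the convergence rate: y_k = p_k - p* should shrink by roughly
# a factor 1/c per step, c = [(1 + p*)/a]^2.  Same illustrative a, alpha as before.
a, alpha = 1.2, 0.3
beta = a**2 + alpha - 1.0
p_star = 0.5 * (beta + np.sqrt(beta**2 + 4.0 * alpha))
c = ((1.0 + p_star) / a) ** 2

p = 5.0                            # arbitrary starting value
y_prev = p - p_star
for k in range(1, 20):
    p = a**2 * p / (p + 1.0) + alpha      # h = 1 Riccati recursion
    y = p - p_star
    print(k, y, y / y_prev)               # the ratio approaches 1/c
    y_prev = y

print(1.0 / c)
```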

Page 32: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Rate of convergence – continued. From (27.2.38), with h = 1, p̂k = pk/(pk + 1), so p̂k → p*/(p* + 1):

the analysis covariance converges as well.

Page 33: Chapter 27:  Linear Filtering - Part I: Kalman Filter

Stability of the Filter

h = 1:  ek+1f = a (1 − Kk h) ekf + a Kk vk + wk+1

The homogeneous part is a (1 − Kk h) ekf. With Kk = Pkf/(Pkf + r) = pk/(pk + 1),

1 − Kk = 1/(pk + 1), ∴ a (1 − Kk) = a/(pk + 1) → a/(p* + 1).