
Page 1

Introduction to Kalman Filter

Page 2

Overview

• The Problem – Why do we need a Kalman filter?
• What is a Kalman Filter?
• Conceptual Overview
• Kalman Filter – as a linear dynamic system
• Object Tracking using the Kalman Filter

Page 3

The Problem

• System state cannot be measured directly
• Need to estimate it “optimally” from measurements

[Block diagram: External controls and system error sources act on the System (a black box whose state is desired but not known); measuring devices, subject to measurement error sources, produce the observed measurements, which an Estimator converts into an optimal estimate of the system state.]

Page 4

Tracking Example

• Noisy measurement + filtering gives the final position.
• Localization + filtering.
• Why filter? The process of finding the “best estimate” from noisy data amounts to “filtering out” the noise.
• The Kalman filter doesn’t just clean up the data measurements; it also projects these measurements onto the state estimate.

Page 5

What is a Kalman Filter?

• Recursive data processing algorithm
• Generates the optimal estimate of desired quantities given the set of measurements
• Optimal?
– For a linear system with white Gaussian errors, the Kalman filter is the “best” estimate based on all previous measurements
– For a non-linear system, optimality is ‘qualified’
• Recursive?
– Doesn’t need to store all previous measurements and reprocess all data at each time step

Page 6

Kalman Filter Assumptions

• Linearity and Gaussian white noise – why?
1. They are adequate to model real-time systems.
2. Gaussians are easy to manipulate, because only two moments need to be calculated.
3. The mathematics is tractable thanks to linearity.
4. Calculations are greatly simplified, which is very important for online algorithms.
5. Practical justification – the central limit theorem.

• Why do we add noise?
1. To model disturbances in the system which are intractable.
2. To model measurement errors in the system.
3. For accurate estimation of the values of a time-dependent process itself, we must include future noise.

Page 7

Kalman Filter

What if the noise is NOT Gaussian?
Given only the mean and standard deviation of the noise, the Kalman filter is the best linear estimator. Non-linear estimators may be better.

Why is Kalman filtering so popular?
· Good results in practice due to optimality and structure.
· Convenient form for online real-time processing.
· Easy to formulate and implement given a basic understanding.
· Measurement equations need not be inverted.

Page 8

Optimality of Kalman Filter

Page 9

Optimal Estimation

Page 10

Conceptual Overview

• Lost on the 1-dimensional line
• Position – y(t)
• Assume Gaussian distributed measurements


Page 11

Conceptual Overview


• Sextant measurement at t1: mean = z1 and variance = σz1²
• Optimal estimate of position: ŷ(t1) = z1
• Variance of the error in the estimate: σx²(t1) = σz1²
• Boat in the same position at time t2 – predicted position is z1

Page 12


Conceptual Overview

• So we have the prediction ŷ⁻(t2)
• GPS measurement at t2: mean = z2 and variance = σz2²
• Need to correct the prediction with the measurement to get ŷ(t2)
• Closer to the more trusted measurement – linear interpolation?

[Plot: prediction ŷ⁻(t2) and measurement z(t2) densities]

Page 13

Conceptual Overview

Page 14


Conceptual Overview

• The corrected mean is the new optimal estimate of position
• The new variance is smaller than either of the previous two variances

[Plot: measurement z(t2), corrected optimal estimate ŷ(t2), and prediction ŷ⁻(t2)]

• The amount of spread is a measure of uncertainty.
• We try to reduce this uncertainty by reducing the variance, merging the two Gaussians as sketched below.
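
A minimal sketch of this merging of two Gaussian estimates in Python (the function name is illustrative, not from the slides); inverse-variance weighting guarantees the fused variance is smaller than either input:

    # Merge two Gaussian estimates (e.g. prediction and measurement).
    def fuse_gaussians(mu1, var1, mu2, var2):
        mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)  # inverse-variance weighted mean
        var = (var1 * var2) / (var1 + var2)             # smaller than var1 and var2
        return mu, var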

Page 15

Conceptual Overview

• Lessons so far:
Make a prediction based on previous data: ŷ⁻, σ⁻
Take a measurement: zk, σz

Optimal estimate: ŷ = prediction + (Kalman gain) * (measurement − prediction)
Variance of estimate = variance of prediction * (1 − Kalman gain)
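
A minimal sketch of these two rules in Python for the scalar case (function and variable names are illustrative, not from the slides):

    def kalman_update_1d(y_pred, var_pred, z, var_z):
        K = var_pred / (var_pred + var_z)   # Kalman gain: trust placed in the measurement
        y_est = y_pred + K * (z - y_pred)   # prediction + gain * (measurement - prediction)
        var_est = (1.0 - K) * var_pred      # variance of prediction * (1 - gain)
        return y_est, var_est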

Page 16


Conceptual Overview

• At time t3, the boat moves with velocity dy/dt = u
• Naïve approach: shift the probability to the right to predict
• This would work if we knew the velocity exactly (perfect model)

[Plot: ŷ(t2) and naïve prediction ŷ⁻(t3)]

Page 17


Conceptual Overview

• Better to assume an imperfect model by adding Gaussian noise
• dy/dt = u + w
• The distribution for the prediction moves and spreads out

[Plot: ŷ(t2), naïve prediction ŷ⁻(t3), and prediction ŷ⁻(t3)]

Page 18


Conceptual Overview

• Now we take a measurement at t3
• Need to once again correct the prediction
• Same as before

[Plot: prediction ŷ⁻(t3), measurement z(t3), and corrected optimal estimate ŷ(t3)]

Page 19

Conceptual Overview

• Lessons learnt from the conceptual overview:
– Initial conditions (ŷk-1 and σk-1)
– Prediction (ŷ⁻k, σ⁻k)
• Use the initial conditions and a model (e.g. constant velocity) to make a prediction
– Measurement (zk)
• Take a measurement
– Correction (ŷk, σk)
• Use the measurement to correct the prediction by ‘blending’ prediction and residual – always a case of merging only two Gaussians
• Optimal estimate with smaller variance

Page 20

Kalman Filter – as a linear discrete-time dynamical system

• Dynamic system – the state of the system is time-variant.
• The system is described by a “state vector” – the minimal set of data sufficient to uniquely describe the dynamical behaviour of the system.
• We keep updating this system, i.e. the state of the system or state vector, based on observable data.

Page 21

KF – as a linear discrete-time system: System Description

• Process equation:
xk+1 = F(k+1, k) · xk + wk
• where F(k+1, k) is the transition matrix taking the state xk from time k to time k+1.
• The process noise wk is assumed to be additive white Gaussian with zero mean and covariance matrix defined by E[wn wkᵀ] = Qk for n = k, and zero otherwise.

Page 22

KF – as a linear discrete-time system: Measurement

• Measurement equation:
yk = Hk · xk + vk
• where yk is the observable data at time k and Hk is the measurement matrix.
• The measurement noise vk is assumed to be additive white Gaussian with zero mean and covariance matrix defined by E[vn vkᵀ] = Rk for n = k, and zero otherwise.
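
As a concrete instance of these two equations, a hypothetical 1-D constant-velocity tracker (the constant-velocity model mentioned on Page 19) could be set up as follows; all numeric values are illustrative assumptions:

    import numpy as np

    dt = 1.0                          # sampling interval (assumed)
    # State x = [position, velocity]; F is the transition matrix F(k+1, k)
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])        # measurement matrix: observe position only
    Q = 0.01 * np.eye(2)              # process-noise covariance Qk (assumed)
    R = np.array([[1.0]])             # measurement-noise covariance Rk (assumed)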

Page 23

Theoretical Basis

• Process to be estimated:
yk = A yk-1 + wk-1, with process noise w of covariance Q
zk = H yk + vk, with measurement noise v of covariance R

• Kalman Filter

Predicted: ŷ⁻k is the estimate based on measurements at previous time steps:
ŷ⁻k = A ŷk-1 + B uk
P⁻k = A Pk-1 Aᵀ + Q

Corrected: ŷk has additional information – the measurement at time k:
K = P⁻k Hᵀ (H P⁻k Hᵀ + R)⁻¹
ŷk = ŷ⁻k + K (zk − H ŷ⁻k)
Pk = (I − K H) P⁻k

Page 24

KF – as a linear time-variant dynamical system

Page 25

Theoretical Basis

Prediction (Time Update):
(1) Project the state ahead: ŷ⁻k = A ŷk-1 + B uk
(2) Project the error covariance ahead: P⁻k = A Pk-1 Aᵀ + Q

Correction (Measurement Update):
(1) Compute the Kalman gain: K = P⁻k Hᵀ (H P⁻k Hᵀ + R)⁻¹
(2) Update the estimate with measurement zk: ŷk = ŷ⁻k + K (zk − H ŷ⁻k)
(3) Update the error covariance: Pk = (I − K H) P⁻k
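
A minimal NumPy sketch of this predict/correct cycle (function and variable names are illustrative; A plays the role of the transition matrix and B the control matrix):

    import numpy as np

    def predict(y, P, A, B, u, Q):
        y_pred = A @ y + B @ u                 # (1) project the state ahead
        P_pred = A @ P @ A.T + Q               # (2) project the error covariance ahead
        return y_pred, P_pred

    def correct(y_pred, P_pred, z, H, R):
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # (1) Kalman gain
        y = y_pred + K @ (z - H @ y_pred)      # (2) update estimate with measurement zk
        P = (np.eye(len(y)) - K @ H) @ P_pred  # (3) update error covariance
        return y, P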

Page 26

KF – Review the complete process

Page 27

Blending Factor

• If we are sure about the measurements:
– The measurement error covariance R approaches zero
– K increases and weights the residual more heavily than the prediction

• If we are sure about the prediction:
– The prediction error covariance P⁻k approaches zero
– K decreases and weights the prediction more heavily than the residual
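
A quick scalar illustration of this trade-off (with H = 1, so K = P⁻ / (P⁻ + R); the numbers are assumed for illustration):

    for P_pred, R in [(1.0, 1e-6), (1e-6, 1.0)]:
        K = P_pred / (P_pred + R)
        # R -> 0 gives K -> 1 (trust the measurement);
        # P- -> 0 gives K -> 0 (trust the prediction)
        print(f"P- = {P_pred}, R = {R} -> K = {K:.6f}")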

Page 28

Kalman Filter – as Gaussian density propagation process

• The random component wk leads to spreading of the density function.
• F(k+1, k) · xk causes the density to drift bodily.
• The effect of an external observation y is to superimpose a reactive effect on the diffusion, in which the density tends to peak in the vicinity of the observations.

Page 29

Object Tracking using the Kalman Filter

• The object is segmented using image processing techniques.
• A Kalman filter is used to make the localization method of the object more efficient.
• Steps involved in vision tracking:
• Step 1: Initialization (k = 0). Find the object position, starting with a large error tolerance (P0 = 1).
• Step 2: Prediction (k > 0). Predict the relative position of the object, x̂⁻k, which is used as the search center for finding the object.
• Step 3: Correction (k > 0). Use the measurement to carry out the state correction with the Kalman filter, obtaining x̂k.

Page 30

Object Tracking using the Kalman Filter

• An advantage is the ability to tolerate small occlusions.
• Whenever the object is occluded, we skip the measurement correction and keep predicting until the object is localized again, as sketched below.
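
A sketch of such a tracking loop, reusing the predict and correct functions from the Page 25 sketch; representing an occluded frame by a detection of None is an assumption for illustration:

    import numpy as np

    def track(y0, P0, A, B, H, Q, R, detections):
        y, P = y0, P0
        u = np.zeros(B.shape[1])      # no external control assumed
        estimates = []
        for z in detections:          # z is None while the object is occluded
            y, P = predict(y, P, A, B, u, Q)
            if z is not None:         # measurement available: apply the correction
                y, P = correct(y, P, z, H, R)
            estimates.append(y)       # prediction-only estimate during occlusion
        return estimates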

Page 31

Object Tracking using the Kalman Filter for a Non-Linear Trajectory

• Extended Kalman Filter – modelling a more general dynamical system, using unconstrained Brownian motion

Page 32

Object Tracking using Kalman Filter

Page 33

Mean Shift Optimal Prediction and Kalman Filter for Object Tracking

Page 34

Mean Shift Optimal Prediction and Kalman Filter for Object Tracking

• Colour-based similarity measurement – Bhattacharyya distance
• Target localization
• Distance maximization
• Kalman prediction
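
A sketch of the colour-based similarity measurement: the Bhattacharyya coefficient ρ between two normalized colour histograms, and the distance derived from it (the function name is illustrative):

    import numpy as np

    def bhattacharyya_distance(p, q):
        # p, q: normalized colour histograms (each sums to 1)
        rho = np.sum(np.sqrt(p * q))           # coefficient: 1 for identical histograms
        return np.sqrt(max(0.0, 1.0 - rho))    # distance: 0 for identical histograms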

Page 35

Object Tracking using an adaptive Kalman Filter combined with Mean Shift

• Occlusion detection is based on the value of the Bhattacharyya coefficient.
• The Kalman parameters are estimated based on the trade-off between the weight given to the measurement and the residual error matrix.

Page 36

Thank You