Gaussian (or Normal) Distribution
600.436/600.636, G.D. Hager, S. Leonard (sleonard/review.pdf)

TRANSCRIPT

Page 1:

Gaussian (or Normal) Distribution

(figure: univariate bell curve marked with mean m and standard deviation s)

Univariate

p(x) \sim N(\mu, \sigma^2), \qquad
p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}

Multivariate

p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma), \qquad
p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}

Page 2:

Properties of Gaussians

• We stay in the "Gaussian world" as long as we start with Gaussians and perform only linear transformations.
• The same holds for multivariate Gaussians.

X \sim N(\mu, \sigma^2), \quad Y = aX + b \implies Y \sim N(a\mu + b,\ a^2\sigma^2)

X_1 \sim N(\mu_1, \sigma_1^2), \quad X_2 \sim N(\mu_2, \sigma_2^2) \implies X_1 + X_2 \sim N(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2)
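These closure properties are easy to check numerically; a quick Monte Carlo sanity check (my own, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, a, b = 2.0, 3.0, 0.5, 1.0

# Sample X ~ N(mu, sigma^2) and apply the linear map Y = aX + b.
x = rng.normal(mu, sigma, size=1_000_000)
y = a * x + b

# The property above predicts Y ~ N(a*mu + b, a^2 * sigma^2).
print(y.mean())   # close to a*mu + b  = 2.0
print(y.var())    # close to a^2*sigma^2 = 2.25
```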

Page 3:

Covariance, covariance, covariance

• Know what a covariance matrix is
  – what it does, what it means, why it matters, how you can compute one, how you can visualize one

\bar{\mathbf{x}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}_i

\mathbf{e}_i = \mathbf{x}_i - \bar{\mathbf{x}}

\Sigma = E[\mathbf{e}_i \mathbf{e}_i^T]
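The three formulas above translate directly to NumPy; a minimal sketch (the 2-D data here is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(1)
# N samples of a 2-D variable, one row per sample x_i.
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])

xbar = X.mean(axis=0)          # sample mean:  (1/N) sum_i x_i
E = X - xbar                   # errors:       e_i = x_i - xbar
Sigma = (E.T @ E) / len(X)     # covariance:   E[e_i e_i^T]

# np.cov with bias=True computes the same estimator (rows = variables).
assert np.allclose(Sigma, np.cov(X.T, bias=True))
```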

Page 4:

Kalman Filter

• Recursive solution for discrete linear filtering problems
  – A state x ∈ R^n
  – A measurement z ∈ R^m
  – Discrete (i.e. for time t = 1, 2, 3, …)
  – Recursive process (i.e. x_t = f(x_{t-1}))
  – Linear system (i.e. x_t = A x_{t-1})

• The system is defined by:

1) linear process model (how a state transitions into another state):

   x_t = A x_{t-1} + B u_{t-1} + w_{t-1}

   where B u_{t-1} is an optional control input and w_{t-1} is Gaussian white noise on the state transition.

2) linear measurement model (how a state relates to a measurement):

   z_t = H x_t + v_t

   where v_t is Gaussian white noise on the observation.

Page 5:

Kalman Filter

• Recipe:

1. Start with an initial guess \mathbf{x}_0, \Sigma_0

2. Compute prior (prediction), the time update (predict):

   \mathbf{x}'_t = A \mathbf{x}_{t-1} + B \mathbf{u}_{t-1}
   \Sigma'_t = A \Sigma_{t-1} A^T + Q

3. Compute posterior (correction), the measurement update (correct):

   K_t = \Sigma'_t H^T (H \Sigma'_t H^T + R)^{-1}
   \mathbf{x}_t = \mathbf{x}'_t + K_t (\mathbf{z}_t - H \mathbf{x}'_t)
   \Sigma_t = (I - K_t H) \Sigma'_t

REPEAT steps 2 and 3 for every time step.

The filter minimizes the sum of the expected squared errors \sum_i e_i^2.
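The recipe translates almost line for line into NumPy. A minimal sketch using the matrix names from the slides; the constant-scalar toy system at the bottom is an assumed example:

```python
import numpy as np

def kalman_step(x, P, z, A, B, u, H, Q, R):
    """One predict/correct cycle of the Kalman filter recipe (P plays the role of Sigma)."""
    # Time update (predict)
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    # Measurement update (correct)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# Toy example (assumed): estimate a constant scalar from noisy measurements.
A = np.eye(1)
H = np.eye(1)
B = np.zeros((1, 1))
u = np.zeros(1)
Q = np.array([[1e-5]])
R = np.array([[0.1]])
x, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(2)
for z in rng.normal(5.0, 0.3, size=200):
    x, P = kalman_step(x, P, np.array([z]), A, B, u, H, Q, R)
print(x)  # converges near 5.0
```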

Page 6:

Initial:
\mathbf{x}_0, \Sigma_0

Predict \mathbf{x}'_1 from \mathbf{x}_0 and \mathbf{u}_0:
\mathbf{x}'_1 = A \mathbf{x}_0 + B \mathbf{u}_0
\Sigma'_1 = A \Sigma_0 A^T + Q

Correction using \mathbf{z}_1:
K_1 = \Sigma'_1 H^T (H \Sigma'_1 H^T + R)^{-1}
\mathbf{x}_1 = \mathbf{x}'_1 + K_1 (\mathbf{z}_1 - H \mathbf{x}'_1)
\Sigma_1 = (I - K_1 H) \Sigma'_1

Predict \mathbf{x}'_2 from \mathbf{x}_1 and \mathbf{u}_1:
\mathbf{x}'_2 = A \mathbf{x}_1 + B \mathbf{u}_1
\Sigma'_2 = A \Sigma_1 A^T + Q

(16-735, Howie Choset, with slides from G.D. Hager and Z. Dodds)

Page 7:

Kalman Filter Limitations

• Assumptions:
  – Linear process model: \mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t + \mathbf{w}_t
  – Linear observation model: \mathbf{z}_t = H\mathbf{x}_t + \mathbf{v}_t
  – White Gaussian noise: N(0, \Sigma)

• What can we do if the system is not linear?
  – Non-linear state dynamics: \mathbf{x}_{t+1} = f(\mathbf{x}_t, \mathbf{u}_t, \mathbf{w}_t)
  – Non-linear observations: \mathbf{z}_t = h(\mathbf{x}_t, \mathbf{v}_t)

Page 8:

Extended Kalman Filter

• Kalman Filter Recipe:
  – Given
  – Prediction
  – Measurement correction

• Extended Kalman Filter Recipe:
  – Given
  – Prediction
  – Measurement correction

(The side-by-side recipe equations on this slide were images and did not survive the transcript.)
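Since the slide's equations are images, here is a minimal sketch of the standard EKF cycle (my formulation, not transcribed from the slide): f and h may be non-linear, and A and H are replaced by the Jacobians of f and h re-evaluated at the current estimate.

```python
import numpy as np

def ekf_step(x, P, z, u, f, h, F_jac, H_jac, Q, R):
    """One EKF predict/correct cycle (sketch): like the Kalman recipe,
    but with non-linear f, h and their Jacobians at the current estimate."""
    # Prediction
    x_prior = f(x, u)
    F = F_jac(x, u)
    P_prior = F @ P @ F.T + Q
    # Measurement correction
    H = H_jac(x_prior)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    x_post = x_prior + K @ (z - h(x_prior))
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# With linear f and h the EKF reduces exactly to the Kalman filter:
f = lambda x, u: x
h = lambda x: x
F_jac = lambda x, u: np.eye(1)
H_jac = lambda x: np.eye(1)
x, P = np.zeros(1), 10.0 * np.eye(1)
for z in [3.1, 2.9, 3.0]:
    x, P = ekf_step(x, P, np.array([z]), None, f, h, F_jac, H_jac,
                    np.array([[1e-4]]), np.array([[0.1]]))
```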

Page 9:

EKF for Range-Bearing Localization

• State \mathbf{s}_t = [x_t\ y_t\ \theta_t]^T: 2D position and orientation
• Input \mathbf{u}_t = [v_t\ \omega_t]^T: linear and angular velocity

• Process model

f(\mathbf{s}_t, \mathbf{u}_t, \mathbf{w}_t) =
\begin{bmatrix} x_{t-1} + \Delta t\, v_{t-1}\cos\theta_{t-1} \\ y_{t-1} + \Delta t\, v_{t-1}\sin\theta_{t-1} \\ \theta_{t-1} + \Delta t\, \omega_{t-1} \end{bmatrix}
+ \begin{bmatrix} w_x \\ w_y \\ w_\theta \end{bmatrix}

• Given a map, the robot sees N landmarks with coordinates

\mathbf{l}_1 = [x_{l_1}\ y_{l_1}]^T, \cdots, \mathbf{l}_N = [x_{l_N}\ y_{l_N}]^T

The observation model is

\mathbf{z}_t = \begin{bmatrix} \mathbf{h}_1(\mathbf{s}_t, \mathbf{v}_1) \\ \vdots \\ \mathbf{h}_N(\mathbf{s}_t, \mathbf{v}_N) \end{bmatrix}, \qquad
\mathbf{h}_i(\mathbf{s}_t, \mathbf{v}_t) =
\begin{bmatrix} \sqrt{(x_t - x_{l_i})^2 + (y_t - y_{l_i})^2} \\ \tan^{-1}\frac{y_t - y_{l_i}}{x_t - x_{l_i}} - \theta_t \end{bmatrix}
+ \begin{bmatrix} v_r \\ v_b \end{bmatrix}

(figure: robot at (x_t, y_t) observing landmarks l_1 … l_N, with range r_1 and bearing b_1 to the first landmark)
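The noise-free part of h_i is straightforward to implement; a small sketch (atan2 replaces the slide's tan⁻¹ of a ratio so the quadrant comes out right):

```python
import math

def range_bearing(xt, yt, theta_t, xl, yl):
    """Noise-free h_i from the slide: range and bearing from robot pose
    (xt, yt, theta_t) to landmark (xl, yl)."""
    r = math.hypot(xt - xl, yt - yl)
    b = math.atan2(yl - yt, xl - xt) - theta_t
    # Wrap the bearing into [-pi, pi).
    b = (b + math.pi) % (2 * math.pi) - math.pi
    return r, b

r, b = range_bearing(0.0, 0.0, 0.0, 3.0, 4.0)
print(r, b)  # range 5.0, bearing atan2(4, 3)
```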

Page 10:

Kalman Filters and SLAM

• Localization: state is the location of the robot
• Mapping: state is the location of 2D landmarks
• SLAM: state combines both

• If the state is the stacked robot pose and landmark coordinates, then we can write a linear observation system
  – note that if we don't have some fixed landmarks, our system is unobservable (we can't fully determine all unknown quantities)

(figure: structure of the covariance Σ; image from http://ais.informatik.uni-freiburg.de)

Page 11:

EKF Range-Bearing SLAM: Prior State Estimation

• State \mathbf{s}_t = [x_t\ y_t\ \theta_t\ \mathbf{l}_1^T \cdots \mathbf{l}_N^T]^T: position/orientation of the robot and landmark coordinates
• Input \mathbf{u}_t = [v_t\ \omega_t]^T: forward and angular velocity

• The process model for localization is

\mathbf{s}'_t = \begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{bmatrix}
+ \begin{bmatrix} \Delta t\, v_{t-1}\cos\theta_{t-1} \\ \Delta t\, v_{t-1}\sin\theta_{t-1} \\ \Delta t\, \omega_{t-1} \end{bmatrix}

This model is augmented to 2N+3 dimensions to accommodate the landmarks, which results in the process equation

\begin{bmatrix} x'_t \\ y'_t \\ \theta'_t \\ \mathbf{l}'_{1,t} \\ \vdots \\ \mathbf{l}'_{N,t} \end{bmatrix} =
\begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \\ \mathbf{l}_{1,t-1} \\ \vdots \\ \mathbf{l}_{N,t-1} \end{bmatrix} +
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \vdots & \vdots & \vdots \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} \Delta t\, v_{t-1}\cos\theta_{t-1} \\ \Delta t\, v_{t-1}\sin\theta_{t-1} \\ \Delta t\, \omega_{t-1} \end{bmatrix}

Landmarks don't depend on the external input.

Page 12:

EKF Range-Bearing SLAM: Prior Covariance Update

We assume static landmarks. Therefore, the function f(s, u, w) only affects the robot's location and not the landmarks:

\Sigma'_t = A_t \Sigma_{t-1} A_t^T + W_t Q W_t^T

Here A_t is the Jacobian of the process model. Its top-left 3×3 block is the Jacobian of the robot motion; because the motion of the robot does not affect the coordinates of the landmarks, the bottom-right block is the 2N×2N identity:

A = \begin{bmatrix} J & 0 \\ 0 & I \end{bmatrix}, \qquad
J = \begin{bmatrix}
\partial x/\partial x & \partial x/\partial y & \partial x/\partial\theta \\
\partial y/\partial x & \partial y/\partial y & \partial y/\partial\theta \\
\partial\theta/\partial x & \partial\theta/\partial y & \partial\theta/\partial\theta
\end{bmatrix}

with J the Jacobian of the robot motion.

Page 13:

Bayes Filter

• Given a sequence of measurements z_1, …, z_k
• Given a sequence of commands u_0, …, u_{k-1}
• Given a sensor model P(z_k | x_k)
• Given a dynamic model P(x_k | x_{k-1})
• Given a prior probability P(x_0)
• Find P(x_k | z_{1:k}, u_{0:k-1})

(figure: graphical model; a chain of states x_0, …, x_{k-2}, x_{k-1}, x_k, with measurements z_{k-2}, z_{k-1}, z_k attached to their states and commands u_0, …, u_{k-2}, u_{k-1} driving the transitions)

Page 14:

Bayesian Localization

• Recall Bayes' Theorem:

P(x \mid z) = \frac{P(z \mid x)\,P(x)}{P(z)}

• Also remember conditional independence
• Think of x as the state of the robot and z as the data we know

The posterior probability distribution is P(\mathbf{x}_k \mid \mathbf{u}_{0:k-1}, \mathbf{z}_{1:k}):

P(\mathbf{x}_k \mid \mathbf{u}_{0:k-1}, \mathbf{z}_{1:k})
= \frac{P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{u}_{0:k-1}, \mathbf{z}_{1:k-1})\, P(\mathbf{x}_k \mid \mathbf{u}_{0:k-1}, \mathbf{z}_{1:k-1})}{P(\mathbf{z}_k \mid \mathbf{u}_{0:k-1}, \mathbf{z}_{1:k-1})}
= \eta_k\, P(\mathbf{z}_k \mid \mathbf{x}_k) \int_{\mathbf{x}_{k-1}} P(\mathbf{x}_k \mid \mathbf{u}_{k-1}, \mathbf{x}_{k-1})\, P(\mathbf{x}_{k-1} \mid \mathbf{u}_{0:k-2}, \mathbf{z}_{1:k-1})\, d\mathbf{x}_{k-1}

where P(\mathbf{z}_k \mid \mathbf{x}_k) is the observation term, the integral is the state prediction, and \eta_k normalizes the recursion.

Page 15:

Bayes Filter Recap

Start from the previous belief over states: P(\mathbf{x}_{k-1} = A), P(\mathbf{x}_{k-1} = B), …, P(\mathbf{x}_{k-1} = Z).

Apply command \mathbf{u}_{k-1}: push each entry through the motion model P(\mathbf{x}_k = A \mid \mathbf{x}_{k-1}, \mathbf{u}_{k-1}), …, P(\mathbf{x}_k = Z \mid \mathbf{x}_{k-1}, \mathbf{u}_{k-1}); that is, combine P(\mathbf{x}_{k-1} \mid \mathbf{u}_{0:k-2}, \mathbf{z}_{1:k-1}) with P(\mathbf{x}_k \mid \mathbf{u}_{k-1}, \mathbf{x}_{k-1}) to get the predicted belief.

Obtain \mathbf{z}_k and apply P(\mathbf{z}_k \mid \mathbf{x}_k): weight each predicted entry by the measurement likelihood to obtain the posterior P(\mathbf{x}_k = A \mid \mathbf{u}_{k-1}, \mathbf{z}_k), …, P(\mathbf{x}_k = Z \mid \mathbf{u}_{k-1}, \mathbf{z}_k), i.e. P(\mathbf{x}_k \mid \mathbf{u}_{0:k-1}, \mathbf{z}_{1:k}).

Page 16:

Predict Motion (Prior Distribution)

• Suppose we have P(\mathbf{x}_k)
• We have P(\mathbf{x}_{k+1} \mid \mathbf{x}_k, \mathbf{u}_k)
• Put together:

P(\mathbf{x}_{k+1}) = \int_{\mathbf{x}_k} P(\mathbf{x}_{k+1} \mid \mathbf{x}_k, \mathbf{u}_k)\, P(\mathbf{x}_k)\, d\mathbf{x}_k

What is the probability distribution for x_{k+1} given the command u_k and all the previous states x_k?

Page 17:

System Model

State space X = {1, 2, 3, 4}

Transition matrix P(x_{k+1} | x_k, u_k): the probability P(j | i) of moving from i to j is given by P_{i,j}. Each row must sum to 1.

              x_{k+1}=1   x_{k+1}=2   x_{k+1}=3   x_{k+1}=4
  x_k = 1       0.25        0.5         0.25        0
  x_k = 2       0           0.25        0.5         0.25
  x_k = 3       0           0           0.25        0.75
  x_k = 4       0           0           0           1

(figure: prior probability distribution P(x_k) over the states 1, 2, 3, 4, with a peak value of 0.5)

Compute P(x_{k+1}) = \int_{x_k} P(x_{k+1} \mid x_k, u_k)\, P(x_k)\, dx_k
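On a discrete state space the prediction integral becomes a matrix-vector product; a sketch using the transition matrix above (the prior P(x_k) here is an assumed example):

```python
import numpy as np

# Transition matrix from the slide: row i = P(x_{k+1} | x_k = i, u_k).
T = np.array([
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.0,  0.0,  0.25, 0.75],
    [0.0,  0.0,  0.0,  1.0 ],
])

# Some prior P(x_k) over the states {1, 2, 3, 4} (assumed example).
p = np.array([0.5, 0.5, 0.0, 0.0])

# Discrete version of P(x_{k+1}) = sum over x_k of P(x_{k+1} | x_k, u_k) P(x_k):
p_next = p @ T
print(p_next)  # [0.125 0.375 0.375 0.125]
```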

Page 18:

Observation Model

• How likely is the measurement z_k given a state x_k?
• The measurement z_k is what we get, so we have to stick with it
• However, not all states are likely to produce that measurement
• We're not interested in finding the most likely state; we want the whole distribution

(figure: P(z_k | x_k) plotted against z_k for a given x_k; it doesn't have to be Gaussian)

Page 19:

Observation Model: Beam Model

• Mixing all these cases together we get

P(z_k \mid \mathbf{x}_k, m) =
\begin{bmatrix} w_\text{hit} \\ w_\text{short} \\ w_\text{max} \\ w_\text{rand} \end{bmatrix}^T
\begin{bmatrix} P_\text{hit}(z_k \mid \mathbf{x}_k, m) \\ P_\text{short}(z_k \mid \mathbf{x}_k, m) \\ P_\text{max}(z_k \mid \mathbf{x}_k, m) \\ P_\text{rand}(z_k \mid \mathbf{x}_k, m) \end{bmatrix}

where the weights are parameters with w_\text{hit} + w_\text{short} + w_\text{max} + w_\text{rand} = 1.

(from Probabilistic Robotics)
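The mixture is a dot product of the weight vector with the component densities; a sketch in which the four component densities are simplified stand-ins (assumed, not from the slides):

```python
import math

def beam_model(z, z_expected, z_max, w):
    """Sketch of the beam-model mixture above. The component densities
    are simplified stand-ins (assumed): a Gaussian 'hit', an exponential
    'short', a spike at z_max, and a uniform 'rand'."""
    sigma = 0.2
    p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    lam = 1.0
    p_short = lam * math.exp(-lam * z) if z <= z_expected else 0.0
    p_max = 1.0 if abs(z - z_max) < 1e-6 else 0.0
    p_rand = 1.0 / z_max if 0 <= z <= z_max else 0.0
    w_hit, w_short, w_max, w_rand = w
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand

w = (0.7, 0.1, 0.1, 0.1)   # the weights must sum to 1
p = beam_model(3.0, 3.0, 10.0, w)
```

Readings near the expected range score much higher than spurious ones, which is what the mixture is for.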

Page 20:

Likelihood Field Model

P_\text{hit}(z_k \mid \mathbf{x}_k, m) = \varepsilon_{\sigma_\text{hit}^2}(d^2)

i.e. a zero-mean Gaussian with variance \sigma_\text{hit}^2 evaluated at the squared distance d^2 to the nearest obstacle in the map.

(from Probabilistic Robotics)

Page 21:

Discrete Bayes Filter

Algorithm Discrete_Bayes_filter( u_{0:k-1}, z_{1:k}, P(x_0) ):

1.  P(x) = P(x_0)   (if you don't know it: uniform distribution)
2.  for i = 1:k
3.      for all states x ∈ X
4.          P(x) = Σ_{x' ∈ X} P(x | u_{i-1}, x') P(x')   (prediction given prior distribution and command)
5.      end for
6.      h = 0   (normalization constant)
7.      for all states x ∈ X
8.          P(x) = P(z_i | x) P(x)   (update using measurement)
9.          h = h + P(x)
10.     end for
11.     for all states x ∈ X
12.         P(x) = P(x) / h   (normalize to 1)
13.     end for
14. end for
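A vectorized sketch of the pseudocode above, assuming the per-step transition matrices already encode the commands u_i:

```python
import numpy as np

def discrete_bayes_filter(T, L, p0):
    """Sketch of Discrete_Bayes_filter above.
    T[i]: transition matrix for step i, row = from-state (commands folded in, assumed);
    L[i]: measurement likelihood vector P(z_i | x) for step i;
    p0:   prior P(x_0)."""
    p = np.asarray(p0, dtype=float)
    for Ti, Li in zip(T, L):
        p = p @ Ti        # prediction: P(x) = sum_x' P(x | u, x') P(x')
        p = Li * p        # update:     P(x) = P(z | x) P(x)
        p = p / p.sum()   # normalize to 1
    return p

# Example on the 4-state system from the slides (likelihoods assumed):
T = np.array([
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.0,  0.0,  0.25, 0.75],
    [0.0,  0.0,  0.0,  1.0 ],
])
p = discrete_bayes_filter([T, T],
                          [np.ones(4), np.array([0.1, 0.8, 0.1, 0.0])],
                          np.array([1.0, 0.0, 0.0, 0.0]))
```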

Page 22:

Particle Filter

• Computing P(\mathbf{z}_k \mid \mathbf{x}_k, m) and P(\mathbf{x}_{k+1} \mid \mathbf{x}_k, \mathbf{u}_k) is not easy
• In practice, they are never directly computable
• We need to propagate an entire conditional distribution, not just one state as we did with the Kalman filter
• Represent the probability distribution by random samples
• This enables estimation of non-Gaussian, non-linear processes

Page 23:

Particle Filter

Page 24:

Particle Filter Algorithm

Algorithm Particle_filter( u_{0:k-1}, z_{1:k}, P(x_0), set of N samples M = {x_j, w_j} ):

1.  for i = 1:k
2.      for j = 1:N
3.          compute a new state x by sampling according to P( x | u_{i-1}, x_j )
4.          x_j = x
5.      end
6.      h = 0
7.      for j = 1:N
8.          w_j = P( z_i | x_j )
9.          h = h + w_j
10.     end
11.     for j = 1:N
12.         w_j = w_j / h
13.     end
14.     resample M according to the weights w_j
15. end
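A minimal sketch of one iteration of the algorithm above for a 1-D state; the motion and likelihood models at the bottom are assumed toy examples:

```python
import numpy as np

def particle_filter_step(particles, u, z, motion_sample, likelihood, rng):
    """One iteration of the Particle_filter pseudocode above.
    motion_sample(x, u, rng) samples from P(x' | u, x);
    likelihood(z, x) evaluates P(z | x). Both are user-supplied
    (assumed interfaces, not from the slides)."""
    # Steps 2-5: propagate every particle through the motion model.
    particles = np.array([motion_sample(x, u, rng) for x in particles])
    # Steps 6-13: weight by the measurement likelihood and normalize.
    w = np.array([likelihood(z, x) for x in particles])
    w = w / w.sum()
    # Step 14: resample according to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy 1-D demo (assumed): random-walk motion, Gaussian measurement noise.
rng = np.random.default_rng(3)
motion = lambda x, u, rng: x + u + rng.normal(0.0, 0.1)
lik = lambda z, x: np.exp(-0.5 * ((z - x) / 0.5) ** 2)
particles = rng.uniform(-10, 10, size=2000)
for z in [4.0, 4.1, 3.9]:
    particles = particle_filter_step(particles, 0.0, z, motion, lik, rng)
print(particles.mean())  # concentrates near 4.0
```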