Assignment, Project and Presentation


1

Assignment, Project and Presentation

• Mobile Robot Localization by using Particle Filter by Chong Wang, Chong Fu, and Guanghui Luo.

• Tracking, Mapping & Localizing the iRobot: An effort using only the iRobot Create’s sensors by PankajKumar Mendapara, Dibyendu Mukherjee, Ashirbani Saha

• Implementing the Monte Carlo Localization using Lego NXT by Yuefeng Wang, Sepideh Seifzadeh

• Little Border Protector Guard by Mohammad Raeesi Ahmad Soleimani

• Autonomous Moving iRobot Based on Vision System by Thanh Nguyen, Mohammed Golam Sarwer

• Soccer for One by Jonathan Vermette, Jonathan Farlam, Shawn DenHartogh

2

Assignment, Project and Presentation

• Thursday (Nov 20)
  • Yuefeng Wang, Sepideh Seifzadeh

• Tuesday (Nov 25)
  • Mohammad Raeesi Ahmad Soleimani
  • Chong Wang, Chong Fu, and Guanghui Luo

• Thursday (Nov 27)
  • Jonathan Vermette, Jonathan Farlam, Shawn DenHartogh
  • PankajKumar Mendapara, Dibyendu Mukherjee, Ashirbani Saha

• Tuesday (Dec 2)
  • Thanh Nguyen, Mohammed Golam Sarwer

3

Mobile Robot Localization (ch. 7, 8)

•We are now back to the topic of localization after reviewing some necessary background.

•Mobile robot localization is the problem of determining the pose of a robot relative to a given map of the environment.

•Remember: in the localization problem, the map is given (known and available).

• Is it hard? Not really, because...

4

Mobile Robot Localization

•Most localization algorithms are variants of the Bayes filter algorithm.

•However, different representations of maps, sensor models, motion models, etc. lead to different variants.

5

Mobile Robot Localization

•Haven’t we solved this already with the Bayes filter algorithm? How?

•The straightforward application of Bayes filters to the localization problem is called Markov localization.

•Here is the (still abstract) algorithm:

6

Mobile Robot Localization

• Algorithm Bayes_filter$(bel(x_{t-1}), u_t, z_t)$:
•   for all $x_t$ do
•     $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
•     $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
•   endfor
• return $bel(x_t)$

7

Mobile Robot Localization

• Algorithm Markov_localization$(bel(x_{t-1}), u_t, z_t, m)$:
•   for all $x_t$ do
•     $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}, m)\, bel(x_{t-1})\, dx_{t-1}$
•     $bel(x_t) = \eta\, p(z_t \mid x_t, m)\, \overline{bel}(x_t)$
•   endfor
• return $bel(x_t)$

The Markov Localization algorithm addresses the global localization problem, the position tracking problem, and the kidnapped robot problem in static environments.
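To make the abstract update rules concrete, here is a minimal sketch of Markov localization on a tiny 1D grid map. Everything in it (the door/wall map, the "move one cell right" motion model, the binary door sensor and its noise values) is an illustrative assumption, not something specified in these slides.

```python
import numpy as np

# Illustrative 1D cyclic world: 1 = door, 0 = wall (assumed example map)
world = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])

def predict(bel, p_move=0.8):
    """Prediction step: the robot tries to move one cell to the right;
    it succeeds with probability p_move and stays put otherwise."""
    moved = np.roll(bel, 1)                 # belief shifted one cell (cyclic world)
    return p_move * moved + (1.0 - p_move) * bel

def correct(bel, z, p_hit=0.9):
    """Correction step: z = 1 means 'door detected'. The sensor reports the
    true cell type with probability p_hit. Normalization plays the role of eta."""
    likelihood = np.where(world == z, p_hit, 1.0 - p_hit)
    bel = likelihood * bel
    return bel / bel.sum()

# Global localization: start from a uniform prior over all cells
bel = np.ones(len(world)) / len(world)
for z in [1, 0, 0, 1]:                      # a short door/wall measurement sequence
    bel = correct(predict(bel), z)
print(np.round(bel, 3))
```

After a few motion/measurement cycles the belief concentrates on the cells whose surrounding door/wall pattern matches the observation sequence, the same qualitative behaviour that Figure 7.5 illustrates.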

8

Mobile Robot Localization

• Revisit Figure 7.5 to see how the Markov localization algorithm works.

• The Markov Localization algorithm is still very abstract. To put it to work (e.g., in your project), we need a lot more background knowledge to realize the motion model, the sensor model, etc.

• We studied them (motion and sensor models) in Chapters 5 and 6.

• Now we put everything together in the Markov Localization algorithm.

9

Mobile Robot Localization

10

Mobile Robot Localization

11

Mobile Robot Localization

• We will discuss a few different implementations of the Markov Localization algorithm, based on:
  • Kalman filter
  • Discrete, grid representation
  • Particle filter (MCL)

12

Bayes Filter Implementations (1)

(Extended) Kalman Filter (Gaussian filters)

(Ch. 3 and 7): pages 201-220 in Ch. 7 and pages 40-64 in Ch. 3

Read and compare them.

13

Bayes Filter Reminder

• Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$

• Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$

14

Gaussians

Univariate: $p(x) \sim N(\mu, \sigma^2)$:

$p(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, \exp\!\left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right)$

Multivariate: $p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:

$p(\mathbf{x}) = \dfrac{1}{(2\pi)^{d/2}\, |\Sigma|^{1/2}}\, \exp\!\left(-\dfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{T} \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})\right)$
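As a quick numerical sanity check of the two densities above, the following sketch (using NumPy and SciPy purely for illustration) evaluates the closed-form expressions and compares them against library implementations.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Univariate N(mu, sigma^2) evaluated at x
mu, sigma = 0.0, 2.0
x = 1.0
p_formula = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
print(p_formula, norm.pdf(x, loc=mu, scale=sigma))   # the two values agree

# Multivariate N(mu, Sigma) evaluated at x (d = 2)
mu_v = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x_v = np.array([0.5, 0.5])
d = len(mu_v)
diff = x_v - mu_v
p_formula = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / \
            np.sqrt((2 * np.pi)**d * np.linalg.det(Sigma))
print(p_formula, multivariate_normal(mean=mu_v, cov=Sigma).pdf(x_v))
```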

15

Properties of Gaussians

$X \sim N(\mu, \sigma^2),\; Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b,\, a^2\sigma^2)$

$X_1 \sim N(\mu_1, \sigma_1^2),\; X_2 \sim N(\mu_2, \sigma_2^2) \;\Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\dfrac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mu_1 + \dfrac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mu_2,\; \dfrac{1}{\sigma_1^{-2} + \sigma_2^{-2}}\right)$

16

Multivariate Gaussians

• We stay in the “Gaussian world” as long as we start with Gaussians and perform only linear transformations.

$X \sim N(\boldsymbol{\mu}, \Sigma),\; Y = AX + B \;\Rightarrow\; Y \sim N(A\boldsymbol{\mu} + B,\, A\Sigma A^{T})$

$X_1 \sim N(\boldsymbol{\mu}_1, \Sigma_1),\; X_2 \sim N(\boldsymbol{\mu}_2, \Sigma_2) \;\Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\dfrac{\Sigma_2}{\Sigma_1+\Sigma_2}\,\boldsymbol{\mu}_1 + \dfrac{\Sigma_1}{\Sigma_1+\Sigma_2}\,\boldsymbol{\mu}_2,\; \dfrac{1}{\Sigma_1^{-1} + \Sigma_2^{-1}}\right)$
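A short numerical illustration of the first property above (a linear map of a Gaussian stays Gaussian), comparing the closed-form mean and covariance of $Y = AX + B$ with sample estimates; the particular matrices are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(mu, Sigma); Y = A X + B should be N(A mu + B, A Sigma A^T)
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
B = np.array([0.5, 0.0])

samples_x = rng.multivariate_normal(mu, Sigma, size=200_000)
samples_y = samples_x @ A.T + B

print(A @ mu + B, samples_y.mean(axis=0))   # closed-form vs empirical mean
print(A @ Sigma @ A.T)                      # closed-form covariance
print(np.cov(samples_y, rowvar=False))      # empirical covariance
```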

17

Discrete Kalman Filter

Estimates the state $x$ of a discrete-time controlled process that is governed by the linear stochastic difference equation

$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$

with a measurement

$z_t = C_t x_t + \delta_t$

18

Components of a Kalman Filter

$A_t$: Matrix ($n \times n$) that describes how the state evolves from $t-1$ to $t$ without controls or noise.

$B_t$: Matrix ($n \times l$) that describes how the control $u_t$ changes the state from $t-1$ to $t$.

$C_t$: Matrix ($k \times n$) that describes how to map the state $x_t$ to an observation $z_t$.

$\varepsilon_t$, $\delta_t$: Random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance $R_t$ and $Q_t$, respectively.

19

Kalman Filter Updates in 1D

20

Kalman Filter Updates in 1D

Correction:

$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t \end{cases}$ with $K_t = \bar{\Sigma}_t C_t^{T} (C_t \bar{\Sigma}_t C_t^{T} + Q_t)^{-1}$

$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - \bar{\mu}_t) \\ \sigma_t^2 = (1 - K_t)\,\bar{\sigma}_t^2 \end{cases}$ with $K_t = \dfrac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}$

21

Kalman Filter Updates in 1D

Prediction:

$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t\mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t \end{cases}$

$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = a_t\mu_{t-1} + b_t u_t \\ \bar{\sigma}_t^2 = a_t^2\,\sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases}$
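A tiny worked example of the 1D equations above, assuming $a_t = b_t = 1$ (the prediction just adds the commanded displacement) and made-up noise variances; none of the numbers come from the slides.

```python
# One 1D Kalman filter step, following the scalar equations above.
# All numbers are illustrative assumptions, not values from the text.
mu, var = 0.0, 1.0            # prior belief bel(x_{t-1})
u = 2.0                       # control: commanded displacement
var_act = 0.5                 # motion (actuation) noise sigma^2_{act,t}
z = 2.3                       # measurement
var_obs = 0.4                 # measurement noise sigma^2_{obs,t}

# Prediction: mu_bar = a*mu + b*u, var_bar = a^2*var + var_act  (a = b = 1)
mu_bar = mu + u
var_bar = var + var_act

# Correction: K = var_bar / (var_bar + var_obs)
K = var_bar / (var_bar + var_obs)
mu_new = mu_bar + K * (z - mu_bar)
var_new = (1 - K) * var_bar

print(mu_bar, var_bar)        # belief after prediction: wider than the prior
print(mu_new, var_new)        # belief after correction: pulled toward z, narrower
```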

22

Kalman Filter Updates

23

Linear Gaussian Systems: Initialization

• Initial belief is normally distributed:

$bel(x_0) = N(x_0;\, \mu_0, \Sigma_0)$

24

Linear Gaussian Systems: Dynamics

• Dynamics are a linear function of state and control plus additive noise:

$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$

$p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$

$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$, where $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$

25

Linear Gaussian Systems: Dynamics

$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$, where $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$

$\overline{bel}(x_t) = \eta \int \exp\!\left\{-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^{T} R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^{T} \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1})\right\} dx_{t-1}$

$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t \end{cases}$

26

Linear Gaussian Systems: Observations

• Observations are a linear function of state plus additive noise:

$z_t = C_t x_t + \delta_t$

$p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$

$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$, where $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, Q_t)$ and $\overline{bel}(x_t) \sim N(x_t;\, \bar{\mu}_t, \bar{\Sigma}_t)$

27

Linear Gaussian Systems: Observations

$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$, where $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, Q_t)$ and $\overline{bel}(x_t) \sim N(x_t;\, \bar{\mu}_t, \bar{\Sigma}_t)$

$bel(x_t) = \eta\, \exp\!\left\{-\tfrac{1}{2}(z_t - C_t x_t)^{T} Q_t^{-1} (z_t - C_t x_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_t - \bar{\mu}_t)^{T} \bar{\Sigma}_t^{-1} (x_t - \bar{\mu}_t)\right\}$

$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t \end{cases}$ with $K_t = \bar{\Sigma}_t C_t^{T} (C_t \bar{\Sigma}_t C_t^{T} + Q_t)^{-1}$

28

Kalman Filter Algorithm

1. Algorithm Kalman_filter($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$):
2. Prediction:
3. $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t$
5. Correction:
6. $K_t = \bar{\Sigma}_t C_t^{T} (C_t \bar{\Sigma}_t C_t^{T} + Q_t)^{-1}$
7. $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$
8. $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$
9. Return $\mu_t, \Sigma_t$
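The algorithm maps almost line-for-line onto a short NumPy routine. This is a minimal sketch, assuming the matrices $A_t, B_t, C_t, R_t, Q_t$ are supplied as NumPy arrays; it is not tied to any particular robot model, and the example values at the bottom are made up.

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction + correction step of the Kalman filter
    (lines 2-8 of the algorithm above)."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Illustrative 1D constant-position example (assumed values):
mu, Sigma = np.array([0.0]), np.array([[1.0]])
A = B = C = np.array([[1.0]])
R, Q = np.array([[0.5]]), np.array([[0.4]])
mu, Sigma = kalman_filter(mu, Sigma, u=np.array([2.0]), z=np.array([2.3]),
                          A=A, B=B, C=C, R=R, Q=Q)
print(mu, Sigma)
```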

29

The Prediction-Correction-Cycle

Prediction:

$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t\mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t \end{cases}$ $\qquad$ $\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = a_t\mu_{t-1} + b_t u_t \\ \bar{\sigma}_t^2 = a_t^2\,\sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases}$

30

The Prediction-Correction-Cycle

Correction:

$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t \end{cases}$ with $K_t = \bar{\Sigma}_t C_t^{T} (C_t \bar{\Sigma}_t C_t^{T} + Q_t)^{-1}$ $\qquad$ $bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - \bar{\mu}_t) \\ \sigma_t^2 = (1 - K_t)\,\bar{\sigma}_t^2 \end{cases}$ with $K_t = \dfrac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}$

31

The Prediction-Correction-Cycle

Correction:

$bel(x_t) = \begin{cases} \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t) \\ \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t \end{cases}$ with $K_t = \bar{\Sigma}_t C_t^{T} (C_t \bar{\Sigma}_t C_t^{T} + Q_t)^{-1}$

Prediction:

$\overline{bel}(x_t) = \begin{cases} \bar{\mu}_t = A_t\mu_{t-1} + B_t u_t \\ \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t \end{cases}$

32

Kalman Filter Summary

•Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$

•Optimal for linear Gaussian systems!

•Most robotics systems are nonlinear!

33

Nonlinear Dynamic Systems

•Most realistic robotic problems involve nonlinear functions:

$x_t = g(u_t, x_{t-1})$

$z_t = h(x_t)$

34

Linearity Assumption Revisited

35

Non-linear Function

36

EKF Linearization (1)

37

EKF Linearization (2)

38

EKF Linearization (3)

39

EKF Linearization: First Order Taylor Series Expansion

• Prediction:

$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} (x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$

• Correction:

$h(x_t) \approx h(\bar{\mu}_t) + \dfrac{\partial h(\bar{\mu}_t)}{\partial x_t} (x_t - \bar{\mu}_t) = h(\bar{\mu}_t) + H_t\,(x_t - \bar{\mu}_t)$

40

EKF Algorithm

1. Extended_Kalman_filter($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$):
2. Prediction:
3. $\bar{\mu}_t = g(u_t, \mu_{t-1})$
4. $\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^{T} + R_t$
5. Correction:
6. $K_t = \bar{\Sigma}_t H_t^{T} (H_t \bar{\Sigma}_t H_t^{T} + Q_t)^{-1}$
7. $\mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))$
8. $\Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t$
9. Return $\mu_t, \Sigma_t$

with the Jacobians $G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ and $H_t = \dfrac{\partial h(\bar{\mu}_t)}{\partial x_t}$.

Compare with the linear Kalman filter: $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$, $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t$, $K_t = \bar{\Sigma}_t C_t^{T} (C_t \bar{\Sigma}_t C_t^{T} + Q_t)^{-1}$, $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$, $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$.
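As with the linear filter, the EKF algorithm translates directly into code once $g$, $h$ and their Jacobians are available. A minimal sketch, assuming the caller supplies g, h, G and H as Python callables (Jacobians derived by hand or numerically); the function and argument names are illustrative.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One Extended Kalman filter step (lines 2-8 of the algorithm above).
    g(u, mu): nonlinear motion model       G(u, mu): its Jacobian w.r.t. the state
    h(mu):    nonlinear measurement model  H(mu):    its Jacobian w.r.t. the state"""
    # Prediction
    mu_bar = g(u, mu)
    G_t = G(u, mu)
    Sigma_bar = G_t @ Sigma @ G_t.T + R
    # Correction
    H_t = H(mu_bar)
    K = Sigma_bar @ H_t.T @ np.linalg.inv(H_t @ Sigma_bar @ H_t.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new
```

For the localization project, g would be the velocity motion model of Chapter 5.3 and h the feature-based measurement model of Chapter 6.6, as discussed on the following slides.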

41

Landmark-based Localization

42

Landmark-based Localization

43

Landmark-based Localization

• Our EKF localization algorithm assumes that the map is represented by a collection of features.

• All features are uniquely identifiable.

• At any point in time $t$, the robot gets to observe a vector of ranges and bearings to nearby features: $z_t = \{z_t^1, z_t^2, \ldots\}$, where each $z_t^i$ contains a range and a bearing.

• The identity of a feature is expressed by a set of correspondence variables, denoted $c_t^i$.

• The correspondence is known.

44

Landmark-based Localization

45

Landmark-based Localization

• We have silently assumed the availability of an appropriate motion and measurement model, and have left unspecified a number of key variables in the EKF update.

• We will now discuss a concrete implementation of the EKF, for feature-based maps.

• Our feature-based maps consist of point landmarks.

• For such point landmarks, we will use the common measurement model discussed in Chapter 6.6 (the feature-based measurement model); a small sketch of this model appears after this list.

• We will also adopt the velocity motion model defined in Chapter 5.3 (velocity motion model).
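To show how the feature-based measurement model feeds the EKF correction step, here is a minimal sketch of the predicted range-bearing measurement and its Jacobian $H_t$ for a single known landmark, assuming a pose state $(x, y, \theta)$ and a landmark at $(m_x, m_y)$. The code is an illustrative sketch of the standard range-bearing formulation, not the exact algorithm from the book.

```python
import numpy as np

def predicted_measurement(mu_bar, landmark):
    """Predicted range and bearing h(mu_bar) to one landmark, and the
    Jacobian H of h with respect to the pose (x, y, theta)."""
    x, y, theta = mu_bar
    mx, my = landmark
    dx, dy = mx - x, my - y
    q = dx**2 + dy**2                      # squared distance to the landmark

    r_hat = np.sqrt(q)                     # predicted range
    phi_hat = np.arctan2(dy, dx) - theta   # predicted bearing (robot frame)
    z_hat = np.array([r_hat, phi_hat])

    H = np.array([
        [-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
        [ dy / q,          -dx / q,         -1.0],
    ])
    return z_hat, H

# Illustrative call: robot at the origin facing +x, landmark at (2, 1)
z_hat, H = predicted_measurement(np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0]))
print(z_hat)
print(H)
```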

46


47

Reminder – The notations

• Nonlinear motion and measurement models:

$x_t = g(u_t, x_{t-1})$, $\quad z_t = h(x_t)$

• Prediction:

$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$, $\quad G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$

• Correction:

$h(x_t) \approx h(\bar{\mu}_t) + H_t\,(x_t - \bar{\mu}_t)$, $\quad H_t = \dfrac{\partial h(\bar{\mu}_t)}{\partial x_t}$

48

49

50

51

52

EKF Localization with unknown Correspondence

pp. 215-218

53
