Lecture 12: GraphSLAM


GraphSLAM

GraphSLAM solves the full SLAM problem.

[Figure: a pose graph with robot poses x_0, ..., x_4 and landmarks m_1, m_2. Each edge carries one quadratic constraint:]

Anchor on the first pose: $x_0^T \Omega_0 x_0$

Motion edges, for $t = 1, \dots, 4$: $(x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1}))$

Measurement edges, with $m_1$ observed from $x_1$ and $x_4$, and $m_2$ observed from $x_2$ and $x_3$: $(z_t - h(m_j, x_t))^T Q_t^{-1} (z_t - h(m_j, x_t))$

Summing every edge gives the GraphSLAM objective:

$$J_{\text{GraphSLAM}} = x_0^T \Omega_0 x_0 + \sum_t (x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1})) + \sum_t (z_t - h(m_{c_t}, x_t))^T Q_t^{-1} (z_t - h(m_{c_t}, x_t))$$
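To make the figure concrete, here is a minimal Python sketch that encodes the same graph as plain data; the container names (poses, motion_edges, and so on) are mine, not from the lecture:

import numpy as np

# Nodes of the pose graph in the figure: five robot poses and two landmarks.
poses = ["x0", "x1", "x2", "x3", "x4"]
landmarks = ["m1", "m2"]

# Motion edges (x_{t-1} -> x_t), each weighted by R_t^{-1} in the objective.
motion_edges = [("x0", "x1"), ("x1", "x2"), ("x2", "x3"), ("x3", "x4")]

# Measurement edges (pose -> observed landmark), each weighted by Q_t^{-1}.
measurement_edges = [("x1", "m1"), ("x4", "m1"), ("x2", "m2"), ("x3", "m2")]

# The anchor term x_0^T Omega_0 x_0 pins the first pose; every other term
# in J_GraphSLAM corresponds to exactly one edge of this graph.
anchor = "x0"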

GraphSLAM Derivation

The ultimate goal of full SLAM is the posterior over the whole trajectory and map, $y_{0:t} = (x_{0:t}, m)$:

$$p(y_{0:t} \mid z_{1:t}, u_{1:t}, c_{1:t})$$

From Bayes' rule:

$$p(y_{0:t} \mid z_{1:t}, u_{1:t}, c_{1:t}) \propto \underbrace{p(z_t \mid y_{0:t}, z_{1:t-1}, u_{1:t}, c_{1:t})}_{\text{state correction}} \, \underbrace{p(y_{0:t} \mid z_{1:t-1}, u_{1:t}, c_{1:t})}_{\text{state prediction}}$$

By the Markov property, the correction reduces to $p(z_t \mid y_t, c_t)$. For the prediction, factor $y_{0:t}$ into $y_{0:t-1}$ and $x_t$:

$$p(y_{0:t} \mid z_{1:t-1}, u_{1:t}, c_{1:t}) = p(x_t \mid y_{0:t-1}, z_{1:t-1}, u_{1:t}, c_{1:t}) \, p(y_{0:t-1} \mid z_{1:t-1}, u_{1:t-1}, c_{1:t-1}) = p(x_t \mid x_{t-1}, u_t) \, p(y_{0:t-1} \mid z_{1:t-1}, u_{1:t-1}, c_{1:t-1})$$

where the second step again uses the Markov property. Unrolling this recursion down to $t = 0$:

$$p(y_{0:t} \mid z_{1:t}, u_{1:t}, c_{1:t}) = \eta \, p(y_0) \prod_t p(x_t \mid x_{t-1}, u_t) \prod_t \prod_i p(z_t^i \mid y_t, c_t^i)$$

with $p(y_0)$ the initial pose guess, $p(x_t \mid x_{t-1}, u_t)$ the prediction at time $t$, and $p(z_t^i \mid y_t, c_t^i)$ the observation of landmark $i$ at time $t$.

GraphSLAM Derivation (continued)

Assume a Gaussian motion model:

$$x_t = g(u_t, x_{t-1}) + v_t, \qquad p(x_t \mid x_{t-1}, u_t) \sim N(g(u_t, x_{t-1}), R_t) \propto \exp\left\{-\tfrac{1}{2}(x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1}))\right\}$$

Assume a Gaussian measurement model:

$$z_t^i = h(y_t, c_t^i) + w_t, \qquad p(z_t^i \mid y_t, c_t^i) \sim N(h(y_t, c_t^i), Q_t) \propto \exp\left\{-\tfrac{1}{2}(z_t^i - h(y_t, c_t^i))^T Q_t^{-1} (z_t^i - h(y_t, c_t^i))\right\}$$

Assume a Gaussian initialization that anchors the first pose at the origin:

$$p(x_0) \sim N([0, 0, 0]^T, \Sigma_0)$$

with $\Sigma_0$ a (near-zero) $3 \times 3$ covariance, i.e. the anchor information $\Omega_0 = \Sigma_0^{-1}$ is very large.
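As a concrete (hypothetical) instantiation of $g$ and $h$, a planar robot with body-frame odometry controls and range-bearing landmark observations could look like the sketch below; these particular models and noise values are illustrative assumptions, not taken from the lecture:

import numpy as np

def g(u, x):
    """Motion model: apply body-frame odometry u = (dx, dy, dtheta)
    to the pose x = (x, y, theta)."""
    c, s = np.cos(x[2]), np.sin(x[2])
    return np.array([x[0] + c * u[0] - s * u[1],
                     x[1] + s * u[0] + c * u[1],
                     x[2] + u[2]])

def h(m, x):
    """Measurement model: range and bearing from pose x to landmark
    m = (mx, my). (Bearing is left unnormalized for brevity.)"""
    dx, dy = m[0] - x[0], m[1] - x[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - x[2]])

# Noise covariances R_t (motion) and Q_t (measurement): fixed guesses here.
R = np.diag([0.1, 0.1, 0.05]) ** 2
Q = np.diag([0.2, 0.02]) ** 2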

GraphSLAM Derivation (continued)

Putting them together, the posterior is a product of Gaussians:

$$p(y_{0:t} \mid z_{1:t}, u_{1:t}, c_{1:t}) = \eta \exp\left\{-\tfrac{1}{2}\left[x_0^T \Omega_0 x_0 + \sum_t (x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1})) + \sum_t \sum_i (z_t^i - h(y_t, c_t^i))^T Q_t^{-1} (z_t^i - h(y_t, c_t^i))\right]\right\}$$

The negative-log posterior is therefore

$$-\log p(y_{0:t} \mid z_{1:t}, u_{1:t}, c_{1:t}) = \text{const} + \tfrac{1}{2}\left[x_0^T \Omega_0 x_0 + \sum_t (x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1})) + \sum_t \sum_i (z_t^i - h(y_t, c_t^i))^T Q_t^{-1} (z_t^i - h(y_t, c_t^i))\right]$$

so the objective of GraphSLAM is

$$J_{\text{GraphSLAM}} = x_0^T \Omega_0 x_0 + \sum_t (x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1})) + \sum_t \sum_i (z_t^i - h(y_t, c_t^i))^T Q_t^{-1} (z_t^i - h(y_t, c_t^i))$$

$$y_{0:t}^* = \arg\min_{y_{0:t}} J_{\text{GraphSLAM}}$$

which is quadratic in the functions g() and h().
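Because the objective is just a sum of covariance-weighted squared residuals, it can be evaluated directly. A minimal sketch (the function name and the tiny 1D models in the usage lines are mine, chosen only for brevity):

import numpy as np

def graphslam_objective(x, m, us, zs, g, h, Omega0, Rinv, Qinv):
    """Evaluate J_GraphSLAM = anchor + motion terms + measurement terms.

    x: poses [x_0..x_T]; m: landmarks; us: controls [u_1..u_T];
    zs: observations as (t, j, z) triples."""
    J = x[0] @ Omega0 @ x[0]                   # anchor x_0^T Omega_0 x_0
    for t in range(1, len(x)):
        r = x[t] - g(us[t - 1], x[t - 1])      # motion residual
        J += r @ Rinv @ r
    for t, j, z in zs:
        r = z - h(m[j], x[t])                  # measurement residual
        J += r @ Qinv @ r
    return float(J)

# Tiny 1D usage check: g(u, x) = x + u, h(m, x) = m - x (illustrative).
g1 = lambda u, x: x + u
h1 = lambda m_, x: m_ - x
J = graphslam_objective(
    x=[np.array([0.0]), np.array([1.1])], m=[np.array([2.0])],
    us=[np.array([1.0])], zs=[(1, 0, np.array([0.8]))],
    g=g1, h=h1,
    Omega0=np.array([[1e6]]), Rinv=np.array([[4.0]]), Qinv=np.array([[25.0]]))
print(J)  # 0.04 (motion) + 0.25 (measurement) = 0.29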

GraphSLAM Derivation: Linearization

Linearize both models with a 1st-order Taylor expansion about the current estimates $\mu_{t-1}$ and $\mu_t$:

$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1}) \qquad \text{(1st-order Taylor at } \mu_{t-1}\text{)}$$

$$h(y_t, c_t^i) \approx h(\mu_t, c_t^i) + H_t^i (y_t - \mu_t) \qquad \text{(1st-order Taylor at } \mu_t\text{)}$$

Substituting into the motion term, with $x_{t-1:t} = (x_{t-1}, x_t)$:

$$(x_t - g(u_t, x_{t-1}))^T R_t^{-1} (x_t - g(u_t, x_{t-1})) \approx x_{t-1:t}^T \begin{pmatrix} -G_t^T \\ I \end{pmatrix} R_t^{-1} \begin{pmatrix} -G_t & I \end{pmatrix} x_{t-1:t} - 2\, x_{t-1:t}^T \begin{pmatrix} -G_t^T \\ I \end{pmatrix} R_t^{-1} \left(g(u_t, \mu_{t-1}) - G_t \mu_{t-1}\right) + \text{const}$$

and into the measurement term:

$$(z_t^i - h(y_t, c_t^i))^T Q_t^{-1} (z_t^i - h(y_t, c_t^i)) \approx y_t^T H_t^{iT} Q_t^{-1} H_t^i y_t - 2\, y_t^T H_t^{iT} Q_t^{-1} \left(z_t^i - h(\mu_t, c_t^i) + H_t^i \mu_t\right) + \text{const}$$

Both terms are now quadratic in the state.
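The linearized motion term thus contributes the block $[-G_t \; I]^T R_t^{-1} [-G_t \; I]$ over the pair $(x_{t-1}, x_t)$. A sketch of computing that contribution, using a finite-difference Jacobian (the helper functions are mine, not from the lecture):

import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (one column per state dim)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (f(x + dx) - fx) / eps
    return J

def motion_block(g, u, mu_prev, Rinv):
    """Contribution of one linearized motion edge, in information form.

    Returns the block added to Omega at the rows/cols of (x_{t-1}, x_t)
    and the matching piece of xi, per the quadratic expansion above."""
    G = numerical_jacobian(lambda x: g(u, x), mu_prev)     # G_t
    A = np.hstack([-G, np.eye(mu_prev.size)])              # [-G_t, I]
    return A.T @ Rinv @ A, A.T @ Rinv @ (g(u, mu_prev) - G @ mu_prev)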

GraphSLAM Derivation (continued)

To compute $y_{0:t}^* = \Omega^{-1} \xi$:

1. Construct the information matrix $\Omega$ and information vector $\xi$.
2. Recover $y_{0:t} = \Omega^{-1} \xi$.

General Flow of GraphSLAM Algorithms

Initialize $y_{0:t}$
Construct $\Omega$ and $\xi$
Solve $y_{0:t} \leftarrow \Omega^{-1} \xi$
Repeat until convergence
Return $y_{0:t}$

A runnable skeleton of this loop follows.
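A minimal sketch of the flow above (function and parameter names are mine; the trivial linear usage example is only a sanity check):

import numpy as np

def graph_slam(y0, construct, iters=20, tol=1e-9):
    """General flow: construct the information form around the current
    estimate, solve, and repeat until the estimate stops changing.

    construct(y) must return (Omega, xi) linearized at y."""
    y = np.asarray(y0, dtype=float)
    for _ in range(iters):
        Omega, xi = construct(y)
        y_new = np.linalg.solve(Omega, xi)      # y <- Omega^{-1} xi
        converged = np.linalg.norm(y_new - y) < tol
        y = y_new
        if converged:
            break
    return y

# Trivial usage: a fixed linear system converges immediately.
print(graph_slam([0.0, 0.0], lambda y: (np.eye(2), np.ones(2))))  # [1. 1.]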

The GraphSLAM Algorithm

1. Initialization
2. Information matrix construction
3. Solving $y_{0:t} \leftarrow \Omega^{-1} \xi$, by sparse Cholesky factorization, by Gaussian elimination in the information matrix, or by the general Gauss-Newton method

1. GraphSLAM_initialize

2. Construct the information matrix $\Omega$ and information vector $\xi$:
$\Omega$ is $(3M + 3N) \times (3M + 3N)$ and $\xi$ is $(3M + 3N) \times 1$.

2. Construct the Information Matrix

1. Initialization: add the anchor information $\Omega_0$ (in practice a very large value) to the $x_0$ block of $\Omega$, pinning the first pose at $[0, 0, 0]^T$.

2. Controls: for each $t$, add to $\Omega$ at the rows/columns of $(x_{t-1}, x_t)$ and to the matching entries of $\xi$:

$$\Omega \mathrel{+}= \begin{pmatrix} -G_t^T \\ I \end{pmatrix} R_t^{-1} \begin{pmatrix} -G_t & I \end{pmatrix}, \qquad \xi \mathrel{+}= \begin{pmatrix} -G_t^T \\ I \end{pmatrix} R_t^{-1} \left(g(u_t, \mu_{t-1}) - G_t \mu_{t-1}\right)$$

3. Measurements: for each observation $z_t^i$ of landmark $j = c_t^i$, add at the rows/columns of $(x_t, m_j)$:

$$\Omega \mathrel{+}= H_t^{iT} Q_t^{-1} H_t^i, \qquad \xi \mathrel{+}= H_t^{iT} Q_t^{-1} \left(z_t^i - h(\mu_t, c_t^i) + H_t^i \mu_t\right)$$

(The linearization point is initialized with $\mu_0 = [0, 0, 0]^T$ and $\mu_t = g(u_t, \mu_{t-1})$; the pose-block Jacobians $G_t$ are embedded into the full state vector via a projection matrix $F_x$.)
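A sketch of how these local contributions are scattered into the global $\Omega$ and $\xi$; the helper add_block and the index bookkeeping are mine, and a full implementation would compute the edge blocks as in the formulas above:

import numpy as np

# Global information form for N poses and M landmarks, 3 parameters each,
# matching the (3M+3N) x (3M+3N) size quoted above. Sizes are illustrative.
N, M = 5, 2
dim = 3 * N + 3 * M
Omega = np.zeros((dim, dim))
xi = np.zeros(dim)

def add_block(Omega, xi, idx, Omega_blk, xi_blk):
    """Scatter one edge's local contribution into the global Omega and xi.

    idx lists the global indices of the local variables, e.g. the six
    indices of (x_{t-1}, x_t) for a motion edge, or of (x_t, m_j) for a
    measurement edge."""
    Omega[np.ix_(idx, idx)] += Omega_blk
    xi[np.asarray(idx)] += xi_blk

# Step 1 (initialization): anchor x_0 with very large information.
Omega[:3, :3] += 1e6 * np.eye(3)
# Steps 2 and 3 would then loop over motion and measurement edges,
# computing their local blocks and scattering them with add_block.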

Property of the information matrix

$$\Omega = \begin{pmatrix} \Omega_{x_{0:t}, x_{0:t}} & \Omega_{x_{0:t}, m} \\ \Omega_{m, x_{0:t}} & \Omega_{m, m} \end{pmatrix}$$

The landmark block $\Omega_{m,m}$ is block diagonal, since landmarks are only ever linked to poses, never to each other.

3. Solve

In each iteration we need to solve $y_{0:t} \leftarrow \Omega^{-1} \xi$.

Method 1. Sparse Cholesky factorization: factor $\Omega = L^T L$, so $\Omega^{-1} = L^{-1} (L^T)^{-1}$ and the system is solved by two triangular substitutions (see the sketch after this list).

Method 2. Gaussian elimination in the information matrix.

Method 3. The more general Gauss-Newton method.
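A dense-matrix sketch of Method 1. Note that NumPy's cholesky factors $\Omega = L L^T$ with lower-triangular $L$, the transpose of the slide's convention, and a real SLAM system would use a sparse factorization and triangular solvers:

import numpy as np

def solve_cholesky(Omega, xi):
    """Method 1: solve Omega @ mu = xi via Cholesky factorization."""
    L = np.linalg.cholesky(Omega)     # Omega = L @ L.T, L lower-triangular
    z = np.linalg.solve(L, xi)        # forward substitution: L z = xi
    return np.linalg.solve(L.T, z)    # back substitution:   L^T mu = z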

2. Gaussian Elimination in the Information Matrix

Eliminate the landmark variables one at a time (remove $m_1$, then $m_3$, and so on) until only the robot poses are left.

Marginalization lemma: Let the probability distribution over the randomvectors x and y be a Gaussian represented in the information form:

If is invertible, the marginal p(x) is a Guassian whose informationrepresentation is

Theorem: Let the probability distribution over the random vectors x andy be a Gaussian represented in the information form:

The conditional p(x|y) is a Guassian whose information representation is

( , )p x y

xx xy

yx yy

and x

y

1xx xx xy yy yx

1x x xy yy y

yy

xx xy

yx yy

and x

y

xx x xy y
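The lemma is easy to sanity-check numerically. The following sketch (random positive-definite $\Omega$ and test values of my own) verifies both the marginal information matrix and the marginal mean against the covariance form:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Omega = A @ A.T + 5 * np.eye(5)            # random positive-definite info matrix
xi = rng.standard_normal(5)
x_idx, y_idx = slice(0, 2), slice(2, 5)    # first 2 dims are x, the rest are y

Oxx, Oxy = Omega[x_idx, x_idx], Omega[x_idx, y_idx]
Oyx, Oyy = Omega[y_idx, x_idx], Omega[y_idx, y_idx]

# Marginal of x in information form (Schur complement):
Obar = Oxx - Oxy @ np.linalg.solve(Oyy, Oyx)
xibar = xi[x_idx] - Oxy @ np.linalg.solve(Oyy, xi[y_idx])

# Check against the covariance form: Sigma = Omega^{-1}, mu = Omega^{-1} xi.
Sigma = np.linalg.inv(Omega)
mu = Sigma @ xi
assert np.allclose(np.linalg.inv(Obar), Sigma[x_idx, x_idx])   # marginal covariance
assert np.allclose(np.linalg.solve(Obar, xibar), mu[x_idx])    # marginal mean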

Information Matrix Reduction

According to the marginalization lemma, eliminating all landmarks reduces the information matrix to

$$\tilde{\Omega} = \Omega_{x_{0:t}, x_{0:t}} - \Omega_{x_{0:t}, m} \, \Omega_{m,m}^{-1} \, \Omega_{m, x_{0:t}}$$

The matrix $\Omega_{m,m}$ is block diagonal, making the inversion easy to calculate: with $\Omega_{j,j}$ the information matrix of landmark $j$, the reduction takes the linear-combination form

$$\tilde{\Omega} = \Omega_{x_{0:t}, x_{0:t}} - \sum_j \Omega_{x_{0:t}, j} \, \Omega_{j,j}^{-1} \, \Omega_{j, x_{0:t}}$$

3. More General Gauss-Newton Method

A more general graph model does not distinguish between controls and observations: every constraint is simply an edge between the nodes it relates.

General Gauss-Newton Method

Given an initial state $\breve{x}$, the quadratic objective can be approximated by a 1st-order Taylor expansion of each constraint, as worked out below.
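In the usual graph-based formulation (notation mine, since the transcript does not preserve these equations), with error functions $e_{ij}$ and edge information matrices $\Omega_{ij}$:

$$e_{ij}(\breve{x} + \Delta x) \approx e_{ij}(\breve{x}) + J_{ij} \Delta x$$

$$F(\breve{x} + \Delta x) \approx F(\breve{x}) + 2\, b^T \Delta x + \Delta x^T H \Delta x, \qquad b = \sum_{ij} J_{ij}^T \Omega_{ij} e_{ij}, \quad H = \sum_{ij} J_{ij}^T \Omega_{ij} J_{ij}$$

Each Gauss-Newton step solves $H \Delta x^* = -b$ and updates $\breve{x} \leftarrow \breve{x} + \Delta x^*$, iterating until convergence.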

1D Example
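The content of the lecture's 1D example did not survive extraction. As a stand-in, here is a minimal 1D instance of my own (two poses, one landmark, linear models, so a single linear solve suffices):

import numpy as np

# State y = [x0, x1, m] on a line. One odometry step u = 1.0 (variance 0.25),
# one landmark measurement z = m - x1 = 0.8 (variance 0.04); x0 is anchored.
Omega = np.zeros((3, 3))
xi = np.zeros(3)

Omega[0, 0] += 1e6                    # anchor: x0 ~ 0 with huge information

A = np.array([[-1.0, 1.0, 0.0]])      # motion residual x1 - x0 - u
Omega += A.T @ A / 0.25               # [-G, I]^T R^{-1} [-G, I], here G = 1
xi += A.ravel() * (1.0 / 0.25)        # [-G, I]^T R^{-1} u

B = np.array([[0.0, -1.0, 1.0]])      # measurement prediction m - x1
Omega += B.T @ B / 0.04               # H^T Q^{-1} H
xi += B.ravel() * (0.8 / 0.04)        # H^T Q^{-1} z

mu = np.linalg.solve(Omega, xi)
print(mu)  # ~ [0.0, 1.0, 1.8]: x1 from odometry, landmark 0.8 beyond x1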

Homework

Programming: implement a GraphSLAM algorithm.

• Compare the Cholesky factorization method and the Gaussian elimination method.

• Compare the information matrix method and the general Gauss-Newton method.