1
Monte Carlo Localization
Naser Nour’Ashrafoddin
Advisor: Dr. Shiry
Computer Department of AmirKabir University of Technology
2
Outline
• Introduction
• Localization
• Localization Methods
  – Markov Localization (review)
  – Monte Carlo Localization (MCL)
• References
3
Introduction
• Navigation: one of the most challenging competences required of a mobile robot.
– Perception: the robot must interpret its sensors to extract meaningful data.
– Localization: the robot must determine its position in the environment.
– Cognition: the robot must decide how to act to achieve its goals.
– Motion control: the robot must modulate its motor outputs to achieve the desired trajectory.
4
Localization
• Ingemar Cox (1991):
“Using sensory information to locate the robot in its environment is the most fundamental
problem to provide a mobile robot with autonomous capabilities.”
• How can a robot tell where it is on a map?
• The robot asks: “Where am I?”
5
Localization
• The localization problem comes in two flavors:
– Global localization: the robot is not told its initial position.
  • Solved by Markov and Monte Carlo methods
– Local localization (position tracking): the robot knows its initial position and only has to accommodate small odometry errors as it moves.
  • Solved by Markov, Monte Carlo, and Kalman methods
6
Markov Localization
• Central idea: represent the robot’s belief as a probability distribution over possible positions, and use Bayes’ rule and convolution to update the belief whenever the robot senses or moves.
• Markov Assumption: past and future data are independent if one knows the current state.
P(d_{t+1}, d_{t+2}, … | L_t = l, d_0, …, d_t) = P(d_{t+1}, d_{t+2}, … | L_t = l)
7
Markov Localization
• Applying probability theory to robot localization.
• Markov localization uses an explicit, discrete representation for the probability of every position in the state space.
• This is usually done by representing the environment by a grid or a topological graph with a finite number
of possible states (positions).
• During each update, the probability for each state (element) of the entire space is updated.
8
Markov Localization
• Bel(L_t = l) is the probability (density) that the robot assigns to the possibility that its location at time t is l.
• The belief is updated in response to two different types of events:
  – Robot motion (odometry)
  – Sensor readings (image)
9
Markov Localization
• Robot motion:
– p(l | l′, a) specifies the probability that a measured movement action a, when executed at l′, carries the robot to l.
• Sensor readings:
– Let s denote a sensor reading and p(s | l) the likelihood of perceiving s given that the robot is at position l.

Motion update: Bel(l) ← ∫ p(l | l′, a) Bel(l′) dl′
Sensor update: Bel(l) ← α p(s | l) Bel(l)
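As a concrete illustration (not from the slides), the two update rules can be sketched on a 1-D grid in Python; the motion kernel and sensor likelihood below are made-up stand-ins for p(l | l′, a) and p(s | l):

```python
import numpy as np

def motion_update(bel, motion_kernel):
    """Convolve the belief with p(l | l', a), here a small 1-D kernel."""
    bel = np.convolve(bel, motion_kernel, mode="same")
    return bel / bel.sum()

def sensor_update(bel, likelihood):
    """Bayes rule: Bel(l) <- alpha * p(s | l) * Bel(l)."""
    bel = bel * likelihood
    return bel / bel.sum()          # division by the sum implements alpha

bel = np.full(10, 0.1)                          # uniform prior over 10 cells
kernel = np.array([0.1, 0.8, 0.1])              # noisy "move one cell" model
likelihood = np.zeros(10); likelihood[4] = 1.0  # sensor strongly favors cell 4

bel = motion_update(bel, kernel)
bel = sensor_update(bel, likelihood)
```

After the sensor update, the belief concentrates on the cell the sensor favors, exactly as the Bayes-rule update prescribes.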
10
Markov Localization
• Does the Markov assumption hold in practice?
• Approximation errors from the discrete (grid or topological) representation.
• Accurate approximation usually requires prohibitive amounts of computation and memory.
11
Monte Carlo
• MCL is a version of Markov localization that relies on a sample-based representation and the sampling / importance re-sampling (SIR) algorithm for belief propagation.
• We are able to represent a density function by a set of N random samples (particles) drawn from it. From the samples we can always approximately reconstruct the density, e.g. using a histogram or a kernel based density estimation technique.
12
Monte Carlo
• For the MCL algorithm one does not need a closed-form description of the motion model p(l | l′, a).
• Instead, a sampling model of the motion model suffices.
• A sampling model is a routine that accepts l′ and a as inputs and generates random poses l distributed according to the motion model.
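A minimal sampling motion model might look as follows; the pose format ⟨x, y, θ⟩ matches the slides, while the odometry action (d, Δθ) and the noise parameters are assumptions for illustration:

```python
import random, math

def sample_motion(pose, action, trans_noise=0.05, rot_noise=0.02):
    """Draw a random pose l ~ p(l | l', a) for odometry action a = (d, dtheta).

    Noise magnitudes are illustrative assumptions, not values from the slides.
    """
    x, y, theta = pose
    d, dtheta = action
    d += random.gauss(0.0, trans_noise * abs(d) + 1e-3)   # noisy translation
    dtheta += random.gauss(0.0, rot_noise)                # noisy rotation
    theta += dtheta
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)
```

Each call returns a different pose, so calling it once per particle produces a sample-based approximation of the motion model.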
13
Monte Carlo
• Central idea: represent the posterior belief Bel(l) by a set of N weighted, random samples or particles S = {s_i | i = 1..N}.
• Samples in MCL are of the type ⟨⟨x, y, θ⟩, p⟩, where ⟨x, y, θ⟩ denotes the robot position and p is a numerical weighting factor with Σ_{n=1}^{N} p_n = 1.
14
Monte Carlo
• Robot motion: generate N new samples.
• Each sample is drawn at random from the previously computed sample set, with likelihood determined by the p-values.
• Let l′ denote the position of the drawn sample. The new sample’s position l is then generated by drawing a single random sample from p(l | l′, a), using the observed action a.
• The p-value of each new sample is 1/N.
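The motion step can be sketched as follows; `sample_motion_1d` is a hypothetical 1-D stand-in for drawing l from p(l | l′, a):

```python
import random

def sample_motion_1d(l_prev, a):
    """Toy 1-D stand-in for l ~ p(l | l', a): move by a plus Gaussian noise."""
    return l_prev + a + random.gauss(0.0, 0.1)

def mcl_motion_update(samples, a):
    """MCL motion step: samples is a list of (pose, weight) pairs.

    Draw N old samples with probability proportional to their p-values,
    propagate each through the motion model, and reset weights to 1/N.
    """
    poses, weights = zip(*samples)
    n = len(samples)
    drawn = random.choices(poses, weights=weights, k=n)   # draw l' by p-values
    return [(sample_motion_1d(l_prev, a), 1.0 / n) for l_prev in drawn]
```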
15
Monte Carlo
• Sensor readings: re-weight the sample set in a way that implements Bayes’ rule from Markov localization.
• More specifically, let ⟨l, p⟩ be a sample. Then

p ← α P(s | l)

where s is a sensor measurement and α is a normalization constant that enforces Σ_{n=1}^{N} p_n = 1.
• Resample from the particles proportionally to their new weights.
16
Monte Carlo
[Figure: particle sets over four stages — current state; robot moves 1 m; robot observes a landmark; re-sampling]
17
Monte Carlo
• The probability density function is represented by samples randomly drawn from it.
• It can represent multi-modal distributions, and thus localize the robot globally.
• Considerably reduces the amount of memory required and can integrate measurements at a higher rate.
• The state is not discretized, so the method is more accurate than grid-based methods.
• Easy to implement.
• A nice property of the MCL algorithm is that it can universally approximate arbitrary probability distributions.
18
Monte Carlo
• Complexity: O(N). Convergence rate: O(N^(-1/2)).
• In contrast to existing Kalman-filter-based techniques, it can represent multi-modal distributions and thus globally localize a robot.
• It drastically reduces the amount of memory required compared to grid-based Markov localization and can integrate measurements at a considerably higher frequency.
• It is more accurate than Markov localization with a fixed cell size, as the state represented in the samples is not discretized.
19
Monte Carlo
• Kidnapping (recovery from failure): the kidnapped robot problem is often used to test a robot’s ability to recover from catastrophic localization failures.
• A well-localized robot is tele-ported to some other place without being told.
• This problem differs from the global localization problem in that the robot might firmly believe itself to be somewhere else at the time of the kidnapping.
• Solution: add a small number of uniformly distributed, random samples after each estimation step.
• What happens in Markov localization?
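The proposed fix can be sketched as follows; the 1-D world bounds and the 5% replacement fraction are illustrative assumptions:

```python
import random

def inject_random_samples(samples, fraction=0.05, lo=0.0, hi=100.0):
    """Replace a small fraction of particles with uniform random poses.

    This keeps some probability mass everywhere in the world, so the filter
    can re-localize after a kidnapping even if every surviving particle is
    wrong. Replacing the last k particles is a simplification; in practice
    one would replace the lowest-weight ones.
    """
    n = len(samples)
    k = max(1, int(fraction * n))
    keep = samples[:n - k]
    fresh = [(random.uniform(lo, hi), 1.0 / n) for _ in range(k)]
    return keep + fresh
```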
20
Monte Carlo
• Adaptive sample set sizes: many more samples are needed during global localization to accurately approximate the true density than are needed for position tracking.
• MCL determines the sample set size on the fly.
• Sampling is stopped whenever the sum of weights p (before normalization) exceeds a threshold T.
• MCL is an online algorithm: it lends itself nicely to an any-time implementation, since the sampling step can be terminated at any time.
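The threshold rule can be sketched as follows; the pose generator and weight function are hypothetical inputs, and the cap on the sample count is an added safeguard:

```python
def adaptive_sample(draw_pose, weight, T=1.0, max_samples=10000):
    """Draw and weight samples until the unnormalized weights sum past T.

    When the belief is focused, individual weights are high and few samples
    are needed; when it is diffuse, weights are low and many more samples
    are drawn before the threshold is reached.
    """
    samples, total = [], 0.0
    while total < T and len(samples) < max_samples:
        l = draw_pose()
        w = weight(l)
        samples.append((l, w))
        total += w
    return samples
```

With a peaked weight function, a generator that draws near the peak stops after very few samples, while one that draws far from it keeps going, which is exactly the adaptive behavior described above.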
21
Monte Carlo
22
Dual MCL
• Regular MCL first guesses a new pose using odometry, then uses the sensor measurements to assess the “importance” of this sample; dual MCL guesses poses using the most recent sensor measurement, then uses odometry to assess the compliance of this guess with the robot’s previous belief and odometry data.
• Different sampling model and re-weighting technique.
• Sampling from the sensor model is a hard problem in robotics.
23
References
[1] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo Localization for Mobile Robots”. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 1999.
[2] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots”. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Orlando, FL, 1999.
[3] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, “Robust Monte Carlo Localization for Mobile Robots”, April 2000.
24
Thanks.