
UGV & UAV SLAM and Mapping

Simon Lacroix, Robotics and AI

LAAS/CNRS, Toulouse, France

With contributions from: Anthony Mallet, Il-Kyun Jung, Thomas Lemaire, Sébastien Bosch, Joan Sola, Cyrille Berger and Teresa Vidal … and others from the robotics and vision communities

UGV & UAV: Simultaneous Localization and Mapping with vision

Contents

•  UGVs & UAVs: outdoor (field) robotics
•  Some computer vision
•  Thoughts, beliefs, feelings, claims, (parentheses)…

Why do we need maps ?

•  To plan motions
   –  evaluate/quantify possible motions
•  To plan tasks
   –  perception, exploration, surveillance
   –  in a multi-robot context
•  To achieve motions
   –  e.g. servo along a path/road/wall…
•  To interact
   –  with users and operators
   –  with other robots
•  To localize the robot (yes, this is the SLAM summer school)
   –  missions defined in localization terms (“reach goal”, “explore area”…)
   –  to ensure the proper execution of motions
   –  to ensure the spatial consistency of the maps

Why do we need maps ?

•  Geographic information systems: multi-layered maps

Why do we need maps ?

•  “Robot GIS”: multi-layered maps

   n.  …
   6.  Threats, targets…
   5.  Landmarks
   4.  Road network
   3.  Color and texture
   2.  3D geometric model
   1.  Digital terrain model

Why vision ?

•  Cameras: low cost, light and power-saving
•  Perceive data
   –  in a volume
   –  very far
   –  very precisely: 1024 x 1024 pixels over a 60º x 60º FOV
      ⇒ 0.06º pixel resolution (1.0 cm at 10.0 m)
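The resolution figures above can be checked with a few lines of Python (a sketch added for this transcript; the 1024-pixel and 60º values come from the slide):

```python
import math

# Figures from the slide: a 1024 x 1024 sensor covering a 60º x 60º
# field of view gives roughly 0.06º per pixel, i.e. about 1 cm
# subtended at 10 m range.
fov_deg = 60.0
pixels = 1024
deg_per_pixel = fov_deg / pixels                            # ~0.059º
size_at_10m = math.tan(math.radians(deg_per_pixel)) * 10.0  # metres
```

This confirms the slide's "1.0 cm at 10.0 m" figure.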

•  Stereovision: 2 cameras provide depth
•  Images carry a vast amount of information, at high rates
•  A vast know-how exists in the computer vision community

Micro UAVs, planetary rovers

Vision-based SLAM

Functions required by any SLAM implementation:
–  Landmark detection
–  Relative observations (measures)
   •  of the landmark positions
   •  of the robot motions
–  Observation associations
–  Estimation: refinement of the landmark and robot positions

Plus:
–  Choice of the landmark representation

Vision brings something to each of these functions.

Vision-based SLAM maps

Landmarks for estimation purposes: visual features (interest points, patches – here represented with the associated normal)

⊕

View-based representation for loop-closing detection purposes (image or place “indexes”)


Who wants such maps ?

Outline

UGV & UAV SLAM and Mapping

•  UGV Mapping

•  UAV Mapping

•  “&”

(A few words on stereovision)


•  The way humans perceive depth: stereo image pair, stereo image viewer, stereo camera
•  Very popular since the early 20th century
•  Anaglyphs: red/blue or polarization

(A few words on stereovision)
•  In 2 dimensions (two linear cameras): the left and right cameras, separated by the baseline b, observe the same point at angles α (left image) and β (right image). The depth is recovered from the disparity:

x = b / (tan(α) + tan(β))

Stereovision = depth by triangulation. Two problems at hand:
•  Finding matches
•  Determining the system geometry
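The triangulation formula above translates directly into code; a minimal sketch (the function name and the 5º example values are illustrative):

```python
import math

def stereo_depth(b, alpha, beta):
    """Depth by triangulation for two linear cameras with baseline b:
    x = b / (tan(alpha) + tan(beta)), where alpha and beta are the
    viewing angles of the same point in the left and right cameras."""
    return b / (math.tan(alpha) + math.tan(beta))

# A point seen 5º off-axis in both cameras, with a 0.4 m baseline:
x = stereo_depth(0.4, math.radians(5), math.radians(5))
```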

(A few words on stereovision)
•  Now with real images: given a pixel in the left image, where is its match in the right image (e.g. among candidates pr1, pr2) ?

(A few words on stereovision)
•  Geometry of stereovision: a pixel pl in the left image (optical center Ol) back-projects to a ray; the candidate 3D points P1, P2, … along this ray all project onto a single line through pr in the right image (optical center Or). This is the “epipolar geometry”: the search for the match of pl can be restricted to that line.

(A few words on stereovision)
•  Finding matches, along a pair of corresponding scanlines:

Left line:  … 3 6 3 7 9 2 8 7 6 8 9 6 4 9 0 9 9 0 …
Right line: … 3 5 7 4 9 6 3 9 6 5 8 6 3 0 1 9 7 5 …

Matching single pixel values is ambiguous: the matches are computed on windows.
Several ways to compare windows: “SAD”, “SSD”, “ZNCC”, Hamming distance on census-transformed images…
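A minimal sketch of window-based matching with SAD over the left scanline shown above. The right line here is an artificial 2-pixel shift of the left one, chosen so that the correct disparity is known; the slide's own right line is a different signal:

```python
def sad(a, b):
    """Sum of absolute differences between two windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left, right, col, win=3, max_disp=10):
    """Find the disparity minimising the SAD between a window centred
    at `col` in the left scanline and candidate windows in the right
    scanline (candidates shifted towards smaller columns)."""
    h = win // 2
    ref = left[col - h: col + h + 1]
    best, best_score = 0, float("inf")
    for d in range(0, max_disp + 1):
        c = col - d
        if c - h < 0:
            break
        score = sad(ref, right[c - h: c + h + 1])
        if score < best_score:
            best, best_score = d, score
    return best

left  = [3, 6, 3, 7, 9, 2, 8, 7, 6, 8, 9, 6, 4, 9, 0, 9, 9, 0]
# Illustrative right line: the left line shifted by 2 pixels,
# so the expected disparity is 2.
right = [3, 7, 9, 2, 8, 7, 6, 8, 9, 6, 4, 9, 0, 9, 9, 0, 1, 2]
d = best_disparity(left, right, col=8)
```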

(A few words on stereovision)
•  Results: original image, disparity image, 3D image – computed at 640x480 @ 30 Hz on a standard CPU

(A few words on stereovision)
•  Error model: an empirical analysis gives the disparity error as a function of the correlation score c: σ_d = f(c)

Maximal errors:
   0.4 m baseline: σ_x ≤ 10⁻³ x²
   1.2 m baseline: σ_x ≤ 3·10⁻⁴ x²

•  Online estimation of the errors. Errors on the 3D coordinates grow with the square of the depth:

x = α / d  ⇒  σ_x = (x² / α) σ_d

•  How to get an estimate of σ_d ? Analyze the correlation curves

(A few words on stereovision)
•  Difficulties (left image vs. 3D points):
   1. “obstacle growing”
   2. “wavelets”
   3. discontinuity smoothing
   4. lack of data
   5. calibration
   6. …

Digital terrain map
•  DTM: elevations z = f(x,y) on a regular Cartesian grid

Ground rover case:
•  varying resolution
•  imprecision on the data ⇒ uncertainties in the values ⇒ a volumetric occupancy grid ?

(Occupancy grids)
•  Mapping with “occupancy grids”:
   –  2D regular discretization of the environment
   –  encodes the probability of presence of an obstacle (“occupied”)

Example with sonar data.
•  Sensor model: probability of the range reading m = d given the occupancy O_i of a cell at a known distance: P(m = d | O_i)
•  Bayes theorem combines the sensor model and the initial knowledge P(O_i):

P(O_i | m = d) = P(m = d | O_i) P(O_i) / [ P(m = d | O_i) P(O_i) + P(m = d | ¬O_i) P(¬O_i) ]

(Occupancy grids)
Well suited for 2D mapping of indoor environments with telemetry

Digital terrain map

•  Confidence model of stereo
•  Model parametrization: (this is “only” a model)
•  Turning the model into a pdf:

P(O | m) = P(m | O) P(O) / P(m)
         = P(m | O) P(O) / [ P(m | O) P(O) + P(m | ¬O) P(¬O) ]

•  Probability updates:

P(O | m1, m2) = P(m2 | O) P(O | m1) / [ P(m2 | O) P(O | m1) + P(m2 | ¬O) P(¬O | m1) ]
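The Bayes update above can be sketched in a few lines: one function computes the posterior from a prior, and chaining it implements the sequential update. The sensor-model numbers are hypothetical:

```python
def bayes_update(p_prior, p_m_given_o, p_m_given_not_o):
    """One Bayes update step:
    P(O | m) = P(m|O) P(O) / (P(m|O) P(O) + P(m|not O) P(not O)).
    Calling it again with the posterior as the new prior gives
    P(O | m1, m2), the sequential update from the slide."""
    num = p_m_given_o * p_prior
    den = num + p_m_given_not_o * (1.0 - p_prior)
    return num / den

# Two consecutive "occupied" readings from a hypothetical sensor that
# reports occupied 70% of the time when the cell is occupied and 20%
# otherwise, starting from an uninformed prior:
p = 0.5
p = bayes_update(p, 0.7, 0.2)   # P(O | m1)
p = bayes_update(p, 0.7, 0.2)   # P(O | m1, m2)
```

Each consistent reading pushes the occupancy probability further from the 0.5 prior.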

Digital terrain map

Simulation results on a 2D profile: one acquisition vs. the fusion of 36 acquisitions, with Bayes and Dempster-Shafer update strategies.

Not (yet) tractable for large 3D environments.

Digital terrain map

Practical implementation 1:
•  simple statistics on the “population” of 3D points that fall in a cell
•  a cell is: (z, σ_z)

Practical implementation 2:
•  each 3D point is associated to a surfacic patch, fused in the model according to a confidence value (Dempster-Shafer like approach)
•  a cell is: (z, c, σ_z)
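Implementation 1 can be sketched as a per-cell running (z, σ_z) estimate. The slides only specify the cell contents; the incremental update rule here (Welford's online algorithm) is our choice:

```python
import math

class DTMCell:
    """A DTM cell holding (z, sigma_z), updated incrementally as 3D
    points fall into the cell. Welford's online algorithm is used so
    the point population need not be stored (our choice; the slides
    only say 'simple statistics on the population of 3D points')."""
    def __init__(self):
        self.n = 0
        self.z = 0.0      # running mean elevation
        self._m2 = 0.0    # running sum of squared deviations

    def add(self, z):
        self.n += 1
        delta = z - self.z
        self.z += delta / self.n
        self._m2 += delta * (z - self.z)

    @property
    def sigma_z(self):
        return math.sqrt(self._m2 / self.n) if self.n > 1 else 0.0

cell = DTMCell()
for z in [1.0, 1.2, 0.9, 1.1]:   # elevations of points hitting the cell
    cell.add(z)
```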

Digital terrain map
•  Dealing with dynamic elements and verticals

DTMs are suited to represent the ground: z = f(x,y) (“2.5 D”). Label the patches according to σ_z, and monitor the patches' evolution over time.


Digital terrain map
•  But we took localization for granted ! What if the pose is refined when closing a loop ?

Before loop closing vs. after loop closing: the already-built grid no longer matches the refined poses.

Applying a deformation on the grid ? (what deformation ?) Storing all the acquired 3D images and re-merging them ?

Digital terrain map
•  But we took localization for granted ! What if the pose is refined when closing a loop ?

“DenseSLAM”: hybrid metric maps (Nieto & Nebot @ ACFR). Principle: anchor local dense maps (local regions of the global map) to highly correlated landmarks.

Exploiting DTMs

•  To localize the rover:
   –  extracting landmarks from the DTM ? (e.g. peaks)
   –  by correlating local DTMs between t and t+1 (or the raw 3D data wrt. the current DTM) ?

Example: minimizing the distance between the new 3D image and the current DTM (simplex algorithm, ICP…), the distance being a function of X and Y.
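The distance-minimization idea can be sketched with a toy DTM. The slides mention simplex or ICP as the minimizer; an exhaustive grid search over integer cell offsets is the simplest stand-in, and the quadratic toy terrain is purely illustrative:

```python
def dtm_distance(dtm, points, dx, dy):
    """Mean squared elevation difference between a new 3D image
    (list of (x, y, z) points) shifted by (dx, dy) and the current
    DTM, here a dict mapping integer cells to elevations."""
    err, n = 0.0, 0
    for x, y, z in points:
        cell = (round(x + dx), round(y + dy))
        if cell in dtm:
            err += (z - dtm[cell]) ** 2
            n += 1
    return err / n if n else float("inf")

def localize(dtm, points, search=2):
    """Search for the (dx, dy) offset minimising the distance
    (grid search stands in for the simplex/ICP of the slides)."""
    return min(
        ((dx, dy) for dx in range(-search, search + 1)
                  for dy in range(-search, search + 1)),
        key=lambda o: dtm_distance(dtm, points, *o),
    )

# A toy curved terrain, and the same surface observed with a
# 1-cell offset along x; localization should recover (1, 0).
dtm = {(x, y): 0.1 * x * x + 0.07 * y * y
       for x in range(6) for y in range(6)}
points = [(x - 1, y, 0.1 * x * x + 0.07 * y * y)
          for x in range(1, 5) for y in range(1, 5)]
offset = localize(dtm, points)
```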

Exploiting DTMs

•  To determine/plan feasible trajectories. Evaluation of one position: convolution of the robot chassis wrt. the DTM

(parenthesis: this is “perception”)

Evaluation of one position: convolution of the robot chassis wrt. the DTM

Traversability maps

1. Discretisation of the perceived area
2. Probabilistic labelling: the correlated pixels are labelled Flat, Obstacle, or Unknown (shown in sensor view and in top view)

Traversability maps

2. Probabilistic labelling
•  Attributes A computed for each cell:
   •  point density
   •  mean elevation and standard deviation
   •  mean normal vector
   •  …
•  Bayesian classification:

P(C_i | A) = P(A | C_i) P(C_i) / P(A)

with the likelihoods P(A | C_i) estimated beforehand (usually not Gaussians)
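The classification rule above, sketched for one attribute. The slide notes that the real likelihoods are usually not Gaussian; a 1-D Gaussian stands in here to keep the sketch short, and the class parameters are hypothetical:

```python
import math

def gaussian(x, mu, sigma):
    """1-D Gaussian likelihood (a stand-in: the slides note the real
    class likelihoods are usually not Gaussian)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(attr, classes):
    """P(Ci | A) = P(A | Ci) P(Ci) / P(A), with P(A) obtained by
    summing over the classes. `classes` maps a label to
    (prior, mean, std) for one attribute."""
    scores = {c: p * gaussian(attr, mu, s) for c, (p, mu, s) in classes.items()}
    total = sum(scores.values())
    return {c: v / total for c, v in scores.items()}

# Hypothetical classes for the elevation-standard-deviation attribute:
classes = {
    "flat":     (0.5, 0.02, 0.02),
    "obstacle": (0.5, 0.30, 0.15),
}
post = classify(0.05, classes)   # a fairly smooth cell
```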

Traversability maps 3. Merging maps: application of the Bayes rule (3 classes here)

Traversability maps 4. Extension: introduction of color/texture attributes

(“map-free” motion generation)

•  Servo on particular perceived (“mapped”) elements – e.g. path following

Perception = trail detection, from texture and color segmentation (cf. the “LAGR” program)

Exploiting traversability maps

•  Trajectory determination
•  Potential-field navigation

Exploiting traversability maps
•  Long-term navigation. Knowing:
   •  where I have to go (my goal, my mission)
   •  what I know about the environment
   •  how I can learn more about the environment
   •  how I can move (my motion modes)

Where should I head to ? How ? What for ?

Navigation vs. exploration ? The essential information in maps: the amount/quality of the encoded information.

Outline

UGV & UAV SLAM and Mapping

•  UGV Mapping
   –  DTM
   –  Traversability maps

•  UAV Mapping

•  “&”

DTM from aerial stereo imagery

•  Good projection characteristics (wrt. UGVs)

2.4 m stereovision bench, mounted on a blimp

DTM from aerial stereo imagery


DTM from aerial imagery
•  Binocular stereo is a bad idea for UAVs ⇒ “Multi-view stereovision” (@ Onera):

1.  Data acquisition
2.  Data registration (full SLAM, bundle adjustment)

DTM from aerial imagery – “Multi-view stereovision” (@ Onera)

1.  Data acquisition
2.  Data registration (full SLAM, bundle adjustment)
3.  Building a DTM:
   •  for each DTM patch, discretize the possible heights
   •  for each height hypothesis, recover the associated pixels in the image sequence
   •  analyse the pixel sequence to declare the “likelihood” of the hypothesis
   •  detect occlusions, apply regularization…
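The height-hypothesis test of step 3 can be sketched as follows: for each candidate height, gather the pixel values it projects to across the sequence and score the hypothesis by their agreement (low variance = photo-consistent = likely the true height). The projection step and the sample values are assumed given here; the slides' actual likelihood analysis is more elaborate:

```python
def inconsistency(samples):
    """Score a height hypothesis by the disagreement of the pixel
    values it gathers across the image sequence: the variance of the
    samples. Low variance means photo-consistent, i.e. a plausible
    height. (A stand-in for the slides' likelihood analysis.)"""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def best_height(hypotheses):
    """Pick the most photo-consistent height among the discretised
    candidates, each mapped to its pixel samples from the sequence."""
    return min(hypotheses, key=lambda h: inconsistency(hypotheses[h]))

# Hypothetical samples for three candidate heights of one DTM patch:
hypotheses = {
    0.0: [120, 135, 90, 140],   # inconsistent: wrong height
    0.5: [101, 100, 102, 99],   # consistent: pixels agree
    1.0: [80, 130, 60, 150],    # inconsistent
}
h = best_height(hypotheses)
```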

DTM from aerial imagery – “Multi-view stereovision”

3 cm resolution DTM from a low-altitude UAV (far from real-time) (M.P. Deseilligny @ Teledetection lab, Montpellier)

Traversability maps from aerial imagery

(what is a homography ?)

A homography is a transformation in the 2D plane: the relation between the images of coplanar points viewed from arbitrary camera positions is a homography.

Conversely, if there is a homography that relates two regions of two images acquired with a non-zero translation (t ≠ 0), the imaged region is planar.
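A sketch of both directions of the statement above: applying a 3x3 homography in homogeneous coordinates, and using it as a planarity test on point matches. The translation-only homography and the tolerance value are illustrative:

```python
def apply_homography(H, pt):
    """Map a 2-D point through a 3x3 homography in homogeneous
    coordinates: (x, y, 1) -> H (x, y, 1), then divide by the last
    component."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

def is_planar(H, matches, tol=1.0):
    """Planarity test from the slide: if a single homography maps the
    points of one image onto their matches in the other (to within
    `tol` pixels), the imaged region is planar."""
    return all(
        abs(apply_homography(H, p)[0] - q[0]) < tol
        and abs(apply_homography(H, p)[1] - q[1]) < tol
        for p, q in matches
    )

# A pure 2-pixel image translation is a (degenerate but valid) homography:
H = [[1, 0, 2], [0, 1, 0], [0, 0, 1]]
matches = [((0, 0), (2, 0)), ((5, 3), (7, 3))]
ok = is_planar(H, matches)
```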

Traversability maps from aerial imagery

1st approach: relies on homography computation between img1 and img2 – no assumption (nor estimate) of the camera motion.


Traversability maps from aerial imagery

1st approach: homography computation


Traversability maps from aerial imagery

Merging traversability maps: use the associated orthoimage (“true” orthoimages)

Traversability maps from aerial imagery

2nd approach (@ Onera): optic flow on the “H-registered” images img1 and H(img2)

Traversability maps from aerial imagery

2nd approach (@ Onera): optic flow on the “H-registered” images img1 and H(img2)
•  Hints on vertical discontinuities
•  Towards a DTM ?


Outline

UGV & UAV SLAM and Mapping

•  UGV Mapping

•  UAV Mapping
   –  DTM
   –  Traversability maps

•  “&”

Air/Ground cooperation
•  Field robotics main missions / tasks:
   •  exploration, reconnaissance (information gathering)
   •  monitoring, surveillance
   •  intervention (rescue, fire fighting…)

•  For all these tasks, air/ground robotic systems bring forth several operational and robotic capacities

Air/Ground cooperation

•  Ground robots
   Good at: precise information gathering, physical intervention, long-duration missions, heavy load carrying
   Not so good at: global information gathering, self-localization, high-speed mobility, avoiding hazards

•  Aerial robots
   Good at: global information gathering, high-speed mobility, avoiding hazards, communication relaying
   Not so good at: long-duration missions, physical intervention, heavy load carrying

Air/Ground cooperation schemes

UAVs assist UGVs:
•  localization
•  communication relay
•  environment modeling

UGVs assist UAVs:
•  detect clear landing areas
•  carry UAVs
•  provide energy support

UAVs and UGVs cooperate to achieve a task:
•  exploration
•  monitoring
•  intervention
•  …

Air/Ground cooperation: issues

“Usual” multi-robot issues:
•  task allocation
•  task planning (incl. communications)
•  coordination (supervision)
•  inter-robot servoing
•  …
•  managing information and decision sharing (incl. the operators)

Environment modeling is at the heart of cooperation.

Air/Ground cooperation
Illustration: rover navigation assisted by aerial observation

Air/Ground cooperation
•  Environment modeling: at the heart of cooperation. Key prerequisite: register aerial and ground data/maps

Air/Ground traversability map registration ? (from the ground vs. from the air)

Air/Ground orthoimage registration ? (aerial orthoimage vs. orthoimage derived from the ground DTM)

Air/Ground data registration ? (air vs. ground)

Air/Ground DTM registration ? (air vs. ground)

A nice solution in [Vandapel-Hebert-IJRR-2006] (spin images approach – cf. the difficulties with peaks)

Common landmarks

Air/Ground registration: match landmarks extracted from the UGV maps against landmarks extracted from the UAV maps

Air/Ground registration

•  Main issue: find common landmarks. What information is really invariant wrt. viewpoints, camera characteristics, environment type, sensor type ?
   ⇒ geometry
   ⇒ building 3D models in a holistic way

Towards richer 3D SLAM

•  Visual line segments
   –  robust extraction remains a bit challenging
   –  (model-driven approaches seem better than data-driven approaches)

Towards richer 3D SLAM

•  Visual line segments
   –  robust extraction remains a bit challenging
   –  (model-driven approaches seem better than data-driven approaches)
   –  a good tracker/matcher is so desirable (SLAM will help)


Towards richer 3D SLAM

•  Monocular line-segment SLAM
   –  estimate the supporting line (not the endpoints)
   –  Plücker parameterization
   –  good properties for SLAM: simple expressions for frame transformations and observations
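The "simple frame transformation" property can be sketched for one common Plücker convention, a line written as (n, v) with moment n and direction v; under a rotation R and translation t the line transforms linearly as v' = R v, n' = R n + t x (R v). The convention choice and the example are ours, not from the slides:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def mat_vec(R, v):
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def transform_plucker(R, t, n, v):
    """Transform a Plücker line (n, v) -- moment n = p x v for any
    point p on the line, direction v -- into another frame:
    v' = R v and n' = R n + t x (R v).
    The linearity in (n, v) is the 'simple frame transformation'
    property mentioned above."""
    v2 = cross((0, 0, 0), v) if False else mat_vec(R, v)
    Rn = mat_vec(R, n)
    txv = cross(t, v2)
    return tuple(a + b for a, b in zip(Rn, txv)), v2

# A line along the x axis through the origin (n = 0), moved by the
# identity rotation and a translation of (0, 0, 1): it should now
# pass through (0, 0, 1), i.e. have moment (0, 0, 1) x (1, 0, 0).
R = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
n, v = transform_plucker(R, (0, 0, 1), (0, 0, 0), (1, 0, 0))
```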

Towards richer 3D SLAM

•  Monocular line-segment SLAM
   –  landmark initialization

(monocular visual SLAM)
•  Early work (Davison, 2003): delayed landmark initialization

(monocular visual SLAM)
•  Early work (Davison, 2003): delayed landmark initialization. A much better approach: inverse-depth parameterization (undelayed initialization)
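The inverse-depth idea can be sketched as the conversion from the parameterization back to a Euclidean point: a landmark is stored as an anchor position, the direction of the first observation ray, and an inverse depth ρ, and the point is anchor + (1/ρ) m(θ, φ). Angle conventions vary between papers; the one below (azimuth θ in the x-y plane, elevation φ) is our choice:

```python
import math

def inverse_depth_to_euclidean(x0, y0, z0, theta, phi, rho):
    """Convert an inverse-depth landmark (anchor camera position
    (x0, y0, z0), azimuth/elevation of the observation ray, inverse
    depth rho) to a Euclidean 3-D point:
    p = anchor + (1 / rho) * m(theta, phi)."""
    m = (math.cos(phi) * math.cos(theta),   # unit ray direction
         math.cos(phi) * math.sin(theta),
         math.sin(phi))
    d = 1.0 / rho                            # depth along the ray
    return (x0 + d * m[0], y0 + d * m[1], z0 + d * m[2])

# A point 5 m straight ahead along x from an anchor at the origin
# (rho = 0.2 m^-1):
p = inverse_depth_to_euclidean(0, 0, 0, 0.0, 0.0, 0.2)
```

The undelayed benefit comes from ρ: even a just-seen feature, whose depth is unknown, gets a well-behaved state (large uncertainty on ρ, including near ρ = 0 for very distant points).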

Towards richer 3D SLAM

•  Monocular line-segment SLAM
   –  early work: delayed landmark initialization
   –  a much better approach: “pseudo” inverse-depth parameterization (undelayed initialization) (ongoing work)


Towards richer 3D SLAM

•  Stereo segments
•  Planar patches
   –  extracted from stereo data using homographies

Towards richer 3D SLAM

•  Stereo segments
•  Planar patches
•  Planes
   –  extracted from monocular sequences using homographies [SILVEIRA-ICRA-07]

Towards richer 3D SLAM

•  Stereo segments
•  Planar patches
•  Planes
•  Collapsing landmarks into 3D structures (points + segments + patches + …)

Towards air/ground 3D SLAM
•  Multi-robot SLAM: “various” loop closure events

1.  “Rendez-vous”: inter-robot relative localization
2.  Map matching
3.  Absolute localization: GPS fix
4.  Absolute localization: localization wrt. an initial map

⇒ 3D geometry is a key for 2. and 4.

•  On the estimation side: distributed management of the various maps

Towards air/ground 3D SLAM
•  Illustration. A mix of:
   –  multi-robot multiple local maps (akin to hierarchical SLAM)
   –  point landmarks, inverse-depth parameterization
   –  line segments, stereovision

Outline

UGV & UAV SLAM and Mapping

•  UGV Mapping

•  UAV Mapping

•  “&”

Maps, maps, maps…

•  Geographic information systems: maps of everywhere, available everywhere ! (googleEarth, virtualEarth, geoportail, Nasa WorldWind, flashEarth…)

Mars images @ 0.25 m resolution, DTM @ 1 m (hirise.lpl.arizona.edu)

Maps, maps, maps !

[Annotated map of the Summer School venue: the college for student accommodation, the hall where the lectures are held in the morning, the computer labs for the practical sessions in the afternoon, and the Medina Hotel for presenters accommodation]

We must endow our robots with the ability to use these maps: 3D geometry is one of the keys.

Summer School maps

Questions ?

Answers !
