
Page 1:

Contributions to Perception for Intelligent Vehicles

Olivier Aycard, Associate Professor at University of Grenoble 1
Laboratoire d’Informatique de Grenoble
http://emotion.inrialpes.fr/aycard
With contributions from Julien Burlet, Trung Dung Vu & Manuel Yguel

Page 2:

Introduction

What is an Intelligent Vehicle?

- An Intelligent Vehicle is a vehicle designed to:
  - monitor a human driver and assist him in driving;
  - drive automatically.
- To perform these tasks, an intelligent vehicle is equipped with sensors to perceive its surrounding environment and with actuators to act in its environment.

[Diagram: Sensors feed Perception, which builds a Model of the environment; a Plan of future actions is computed from this model and Control commands the Actuators. Illustration: electric CyCab demonstrator (INRIA).]

Page 3:

Introduction

Goal

- Vehicle perception in open and dynamic environments
- Laser scanner
- Speed and robustness

Present focus: interpretation of raw and noisy sensor data
- Identify the static and dynamic parts of the sensor data
- Modeling the static part of the environment
  - Simultaneous Localization And Mapping (SLAM)
- Modeling the dynamic part of the environment
  - Detection And Tracking of Moving Objects (DATMO)

Page 4:

Problem statement

Notation (from the perception diagram):
- X: vehicle state
- M: static objects (map)
- O: moving objects
- Z: perception measurements
- U: vehicle motion measurements

SLAM (static environments): estimate P(X, M | Z, U)

SLAM + DATMO (dynamic environments): estimate P(X, M, O | Z, U)

SLAM + moving object detection: split the measurements into a static and a dynamic part, Z = Z^(s) + Z^(d), then estimate P(X, M | Z^(s), U) and P(O | Z^(d))

Page 5:

Outline

- Part I: Introduction
- Part II: SLAM with Moving object detection
- Part III: SLAM + DATMO
- Part IV: Conclusion & Perspectives

Experimental results on real vehicles illustrate the SLAM + DATMO theoretical contributions.

Page 6:

Part II. SLAM with Moving Object Detection

Two contributions:
1. A fast and robust method to perform SLAM + moving object detection (PhD thesis of Vu, 2009)
2. A new method to map 2D/3D environments (PhD thesis of Yguel, 2009)

Page 7:

Map representation

Trade-off between model size and environment representability:
- Point cloud map [Lu’97]: - model size, + environment representability
- Feature-based map [Leonard’91]: + model size, - environment representability
- Occupancy Grid (OG)-based map [Elfes’89]: - model size, + environment representability

Page 8:

SLAM

- Incremental mapping [Elfes’89, Thrun’00]: the occupancy of each cell is updated in log-odds form from the previous estimate, the inverse sensor model and the a priori map:

  log O(C = c | x_{1:t}, z_{1:t}) = log O(C = c | x_{1:t-1}, z_{1:t-1}) + log O(C = c | x_t, z_t) - log O_0(C = c)

  where the second term on the right is the inverse sensor model, the last term is the a priori map, and the occupancy probability P(C = c | x_{1:t}, z_{1:t}) is recovered from the odds.

- Maximum Likelihood Localization [Vu’07]
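
As an illustration, a minimal Python sketch of this per-cell log-odds update; the helper names are hypothetical and the inverse-sensor-model value 0.7 is only a placeholder, not the value used in [Vu’07]:

```python
import math

# Log-odds helpers: occupancy is stored as log odds, so the
# Bayesian update becomes a simple addition.
def prob_to_log_odds(p):
    return math.log(p / (1.0 - p))

def log_odds_to_prob(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

LOG_ODDS_PRIOR = prob_to_log_odds(0.5)   # a priori map: unknown occupancy

def update_cell(cell_log_odds, inverse_sensor_prob):
    """One incremental update of a single cell:
    l_t = l_{t-1} + log O(c | x_t, z_t) - log O_0(c)."""
    return (cell_log_odds
            + prob_to_log_odds(inverse_sensor_prob)
            - LOG_ODDS_PRIOR)

# Example: a cell hit by the laser (inverse sensor model says 0.7 occupied)
l = 0.0                      # unknown cell
l = update_cell(l, 0.7)      # after one 'occupied' observation
print(log_odds_to_prob(l))   # ~0.7
```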

Page 9:

Incremental grid mapping [Elfes’89,Thrun’00]

Update a 2D grid with several 1D laser beams

- The position of the vehicle is known
- Find the cell (x, y) in the 2D grid corresponding to an observation z_t
- Update all the cells on the segment from the vehicle position to (x, y), using the Bresenham algorithm and an inverse sensor model:
  - decrease the probability of occupancy of the cells before z_t
  - increase the probability of occupancy of the cells close to z_t
  - do nothing for the other cells
- Incremental process: at each time step, new observations are used to update the grid (a minimal sketch follows below)

[Figure: inverse sensor model P(c_{x,y} | z_t) along a laser beam, from the position of the vehicle to the position of z_t]
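
A minimal Python sketch of the beam update described above, assuming a 2D NumPy array of log-odds values; the free/occupied increments are illustrative placeholders, not the values of [Elfes’89]:

```python
import numpy as np

L_FREE = np.log(0.3 / 0.7)   # inverse sensor model: cells before the hit
L_OCC  = np.log(0.7 / 0.3)   # inverse sensor model: cell at the hit

def bresenham(x0, y0, x1, y1):
    """Integer cells on the segment from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def update_beam(grid, vehicle_cell, hit_cell):
    """Decrease occupancy along the beam, increase it at the hit cell."""
    ray = bresenham(*vehicle_cell, *hit_cell)
    for (cx, cy) in ray[:-1]:          # cells before z_t: free
        grid[cx, cy] += L_FREE
    grid[hit_cell] += L_OCC            # cell of z_t: occupied

grid = np.zeros((100, 100))            # log odds 0 = unknown
update_beam(grid, (10, 10), (10, 40))  # one laser beam
```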

Page 10:

Example of Maximum Likelihood Localization [Vu’07]

[Figure: three candidate vehicle poses scored against the occupancy grid, with likelihoods P(.) = 0.21, P(.) = 0.92 and P(.) = 0.17; the pose with the maximum likelihood is kept]

Page 11:

SLAM + moving object detection

Incremental mapping [Elfes’89,Thrun’00]

Maximum Likelihood Localization [Vu’07]

Moving object detection

- Inconsistencies between the OG and new observations make it possible to decide whether a measurement belongs to a static or a moving object
- Close points are grouped to form objects (a minimal sketch follows below)

[Figure: occupancy grid with free and occupied cells, maintained with the same log-odds update as above (inverse sensor model and a priori map); measurements inconsistent with the grid, e.g. hits falling in free cells, are labelled as belonging to moving objects]
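
A minimal Python sketch of this detection-by-inconsistency step, under simplifying assumptions: the grid is a log-odds array as in the previous sketch, the free threshold is arbitrary, and clustering is a simple greedy distance-threshold pass; names are hypothetical:

```python
import numpy as np

FREE_THRESHOLD = -0.5   # log odds below this: cell considered free

def detect_dynamic_hits(grid, hit_cells):
    """A hit falling in a cell previously seen as free is inconsistent
    with the static map, so it is labelled as a dynamic measurement."""
    return [c for c in hit_cells if grid[c] < FREE_THRESHOLD]

def cluster(points, max_dist=1.0):
    """Greedy pass: join the first cluster within max_dist, else start a new one."""
    clusters = []
    for p in points:
        p = np.asarray(p, dtype=float)
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) <= max_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Example: two hits in free space, close to each other -> one moving object
grid = np.full((100, 100), -2.0)           # everything previously seen free
dynamic = detect_dynamic_hits(grid, [(20, 20), (20, 21)])
objects = cluster(dynamic)
print(len(objects))                        # 1
```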

Page 12:

Experiments

- Daimler demonstrator (European project PReVENT) [Vu’07]
  - Laser scanner: resolution 1°, range 70 m, FOV 160°, frequency 40 Hz
  - Velocity, steering angle
  - High speed (>120 km/h)
  - Camera for visual reference
  - Different scenarios: city streets, country roads, highways
- Volkswagen demonstrator (European project Intersafe2) [Baig’09]
  - SICK laser scanner: resolution 1°, range 80 m, FOV 160°, frequency 37.5 Hz
  - Odometry: rotational and translational speed
  - Camera for visual reference
  - Urban traffic

[Photo: demonstrator vehicle with the stereo vision camera and the laser scanner]

Page 13:

Results - SLAM + Moving object detection

• Execution time: ~20 ms on a PIV 3.0 GHz PC with 2 GB RAM

• Daimler demonstrator

Page 14:

Some limitations of Occupancy Grid

- Representation of the environment using an Occupancy Grid (OG)
  - In 2D large-scale environments or in 3D environments, most of the space is empty
    => storing the full OG is wasteful
  - During the update of the OG, most of the time is spent updating empty cells
  - A cell only contains information about its occupancy
    - all information about the shape of the cell content is lost
- Multi-scale representation: useful for some tasks
  - Use coarse maps to obtain a rough estimate of the position (localization) or of the trajectory (path planning)
  - This rough estimate is used to initialize the estimation at the fine scale.

=> Multiscale Gaussian Maps have been defined to overcome these limitations

Page 15:

Multi Scale Gaussian Maps - Summary

- Representation of the environment
  - 3 scales are defined: fine, intermediate & coarse
  - Each scale is a sparse grid where only occupied cells are stored
  - A sparse grid is encoded using a hash table
  - A cell contains two kinds of information:
    1. the shape of the cell content, represented by one or more Gaussians
    2. the occupancy of the cell, deduced from the shape of the cell content
       => no extra processing is needed to update the occupancy of a cell
- Two operations are performed on Multi Scale Gaussian Maps:
  1. incremental mapping using the observations (performed at each step)
  2. map refinement (performed periodically)
     => improves the quality of the representation by adding Gaussians in some cells
- See the PhD thesis of Manuel Yguel (2009) for more information; a sketch of the data structure follows below
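
An illustrative Python sketch of such a sparse, hash-table-based multi-scale structure; class and field names are hypothetical, and the cell sizes are those given in the experiment slide later in this part:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Gaussian:
    mean: np.ndarray                      # mean vector of the component
    cov: np.ndarray                       # covariance (shape of the content)

@dataclass
class Cell:
    gaussians: list = field(default_factory=list)
    # Occupancy is not stored: it is deduced from the Gaussians in the cell.

class SparseScale:
    """One scale of the map: a hash table (dict) keyed by cell index,
    storing only occupied cells."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}                   # (i, j, k) -> Cell

    def cell_index(self, point):
        return tuple(np.floor(np.asarray(point) / self.cell_size).astype(int))

    def cell_at(self, point):
        idx = self.cell_index(point)
        return self.cells.setdefault(idx, Cell())

# Three scales, as in the talk: fine, intermediate and coarse
scales = {"fine": SparseScale(0.2),
          "intermediate": SparseScale(3.2),
          "coarse": SparseScale(12.8)}

p = np.array([1.3, 4.7, 0.2])
scales["fine"].cell_at(p).gaussians.append(
    Gaussian(mean=p.copy(), cov=0.01 * np.eye(3)))
```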

Page 16:

Multi Scale Gaussian Maps - Example

[Figure: a simulated 3D environment and its map at the fine, intermediate and coarse scales, with one Gaussian per cell; only the shape is shown]

Page 17:

Multi Scale Gaussian Maps – Incremental mapping

Incremental mapping:
1. Find the cell c_t that contains the observation z_t^p
   => only cells that contain an observation are updated
2. In the cell c_t, find the Gaussian n having the minimum distance to that observation:
   n = argmin_{i=1..k} d(z_t^p, μ_{w_i}), where μ_{w_1}, ..., μ_{w_k} are the k Gaussian mean vectors of the cell c_t
3. Update the parameters of the Gaussian n:
   μ_{w_n} ← μ_{w_n} + ε(c_t) (z_t^p − μ_{w_n})
   Σ_{w_n} ← Σ_{w_n} + ε(c_t) [(z_t^p − μ_{w_n})(z_t^p − μ_{w_n})^T − Σ_{w_n}]
- ε(c_t) is a learning rate used to control the update; it is a function of the occupancy of the cell
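
A minimal Python sketch of steps 2 and 3, where a fixed learning rate stands in for ε(c_t) (which in the actual method depends on the occupancy of the cell); names are hypothetical:

```python
import numpy as np

def update_cell(gaussians, z, eps=0.05):
    """gaussians: list of [mean, cov] pairs stored in the cell containing z.
    Pick the closest Gaussian and move its mean and covariance towards z."""
    if not gaussians:
        gaussians.append([z.copy(), 0.01 * np.eye(len(z))])
        return
    # 2. Gaussian with the minimum distance to the observation z
    n = min(gaussians, key=lambda g: np.linalg.norm(z - g[0]))
    # 3. Online update of mean and covariance with learning rate eps
    diff = z - n[0]
    n[0] = n[0] + eps * diff                           # mean update
    n[1] = n[1] + eps * (np.outer(diff, diff) - n[1])  # covariance update

# Example: one cell, two observations of roughly the same surface patch
cell = []
update_cell(cell, np.array([1.30, 4.70, 0.20]))
update_cell(cell, np.array([1.35, 4.68, 0.22]))
print(cell[0][0])   # mean has moved slightly towards the second point
```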

Page 18:

Multi Scale Gaussian Maps – Map refinement

Map refinement (performed periodically, i.e. not at every step)

To improve the quality of the representation, one Gaussian is added to a given cell:
1. A measure of the representation error is computed for each cell c:
   E(c) ← (1 / N(c)) Σ_{i=1..k} Σ_{j=1..l} (μ_{w'_j} − μ_{w_i})^T Σ_{w_i}^{-1} (μ_{w'_j} − μ_{w_i})
   where μ_{w_1}, ..., μ_{w_k} are the k Gaussian mean vectors of the cell c and μ_{w'_1}, ..., μ_{w'_l} are the l Gaussian mean vectors of the cells corresponding to c at the finer scale (the error at a coarse scale is measured against the finer scale)
2. One Gaussian is inserted into the cell with the maximum error
3. The Gaussians of that cell c are re-estimated to minimize E(c):
   {w_1, ..., w_{k+1}} ← argmin_{w_1, ..., w_{k+1}} E(c), where w_1, ..., w_{k+1} are the (k+1) Gaussian mean vectors of the cell c
   => use of the k-means algorithm [Llo82]
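
An illustrative Python sketch of the refinement loop under strong simplifications: the representation error here is a plain Euclidean distance between finer-scale means and the closest coarse-scale mean (not the covariance-weighted measure above), and the re-estimation is a small hand-rolled k-means; all names are hypothetical:

```python
import numpy as np

def cell_error(coarse_means, fine_means):
    """Simplified representation error of a cell: average distance from each
    finer-scale mean to the closest coarse-scale Gaussian mean."""
    d = [min(np.linalg.norm(f - c) for c in coarse_means) for f in fine_means]
    return float(np.mean(d))

def kmeans(points, k, iters=20):
    """Plain k-means [Llo82] used to re-estimate the Gaussian means."""
    centers = points[np.random.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.array([np.argmin([np.linalg.norm(p - c) for c in centers])
                           for p in points])
        centers = np.array([points[labels == i].mean(axis=0)
                            if np.any(labels == i) else centers[i]
                            for i in range(k)])
    return centers

def refine(cells):
    """cells: dict cell_id -> (coarse_means, fine_means).
    One refinement step: add one Gaussian to the worst cell and
    re-estimate the means of that cell."""
    worst = max(cells, key=lambda c: cell_error(*cells[c]))
    coarse_means, fine_means = cells[worst]
    k = min(len(fine_means), len(coarse_means) + 1)   # one Gaussian is added
    new_means = kmeans(np.asarray(fine_means, dtype=float), k)
    return worst, new_means

# Toy example: one coarse cell summarizing three finer-scale means
cells = {"c1": ([np.zeros(3)], [np.zeros(3), np.ones(3), 2.0 * np.ones(3)])}
worst, new_means = refine(cells)
print(worst, new_means.shape)    # c1 (2, 3)
```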

Page 19:

Experiments

- SICK (IBEO) dataset
  - Localization was provided
  - Offline data
  - 3D laser scanner mounted on a vehicle: 6 million 3D points in a global Cartesian frame
  - Camera (mounted on the same vehicle) synchronized with the laser: an RGB component associated with each point
  - Modelling of an urban scene
- Results
  - 6D multiscale Gaussian map: 700 m (length) x 200 m (width) x 50 m (height)
  - 3 scales:
    - fine scale: cell size 0.2 m
    - intermediate scale: cell size 3.2 m
    - coarse scale: cell size 12.8 m

Page 20:

Results – 3D(+3D) mapping

- Representation with less than 0.1% of the original data
- Color is used as an extra dimension for clustering

=> + model size, + environment representability (the two criteria of the map-representation comparison)

Page 21:

Part III. DATMO

Two contributions:
1. A complete framework to perform DATMO and classification (PhD thesis of Vu, 2009)
2. A method to learn the dynamic models of tracked objects (PhD thesis of Burlet, 2007)

Page 22:

Tracking of Moving Objects: example

- Detected moving objects need to be tracked: filtering techniques (KF, PF)
- Difficulties:
  - highly dynamic environments;
  - the number of objects is unknown;
  - objects can enter/leave the observation space or be occluded;
  - sensor faults (missed detections, ghosts).

[Diagram: the predicted states of known objects and the new observations go through gating, then association refinement (nearest neighbour, probabilistic techniques such as JPDA or MHT, etc.), then object refinement (confirmation, destruction, creation); a minimal sketch of the gating and association step follows below]
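
A minimal Python sketch of gating plus nearest-neighbour association, under simplifying assumptions (Euclidean distances instead of the Mahalanobis gating a Kalman filter would provide; names are hypothetical):

```python
import numpy as np

def associate(predicted, observations, gate=2.0):
    """Greedy nearest-neighbour association with gating.
    predicted:    (N, 2) array of predicted object positions
    observations: (M, 2) array of new detections
    Returns a dict {object index -> observation index} plus the indices of
    unassociated observations (candidates for track creation)."""
    pairs = {}
    free_obs = set(range(len(observations)))
    for i, p in enumerate(predicted):
        if not free_obs:
            break
        # Gating: only observations closer than `gate` are considered
        dists = {j: np.linalg.norm(p - observations[j]) for j in free_obs}
        j, d = min(dists.items(), key=lambda kv: kv[1])
        if d <= gate:
            pairs[i] = j
            free_obs.remove(j)
    return pairs, free_obs

predicted = np.array([[0.0, 0.0], [10.0, 5.0]])
observations = np.array([[0.3, -0.2], [25.0, 25.0]])
print(associate(predicted, observations))   # ({0: 0}, {1})
```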

Page 23:

DATMO – known problems when using a laser scanner

- Objects are represented by groups of points
- Tracking groups of points leads to a degradation of the tracking results
- Object splitting (occlusions, glass surfaces) makes the tracking harder

=> Object models are used to overcome these problems

Page 24:

DATMO – Our approach

• Considering the data sequence over a sliding window of time

• Maximizing a posterior probability

• Interpretation of moving objects and their trajectories from a laser sequence

=> Simultaneous Detection, Classification and Tracking of Moving Objects

A hypothesis is a trajectory of object models.

Page 25:

Representation and exploration of the space of moving object hypotheses

- Define object models
  - a box model to represent cars, trucks, buses and motorcycles
  - a point model to represent pedestrians
- Incremental construction of the graph of hypotheses
- Exploration of the graph of hypotheses: the hypothesis space is huge!
  - use of sampling techniques (MCMC)

[Figure: moving object hypotheses generated over a sliding window of time (t-2, t-1, t) and the corresponding incremental graph of hypotheses]

Page 26:

Evaluation of a hypothesis given the observations

• MAP estimate:

- Prior model: adds some a priori constraints on individual objects and on temporal motion consistency (rewarding consistent hypotheses and penalizing inconsistent ones)
- Likelihood model: evaluates the likelihood of the observations knowing the hypothesis
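
As a hedged reconstruction of the general form of this MAP estimate (the symbol ω for a hypothesis over the sliding window is an assumption; the exact prior and likelihood terms are those defined in [Vu’09]):

$$\omega^{*} \;=\; \arg\max_{\omega}\, P(\omega \mid Z) \;=\; \arg\max_{\omega}\, P(Z \mid \omega)\, P(\omega)$$

where $P(\omega)$ is the prior model and $P(Z \mid \omega)$ the likelihood model of the observations $Z$ over the sliding window of time.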

Page 27:

DDMCMC (Data-Driven MCMC): example of a resulting solution

Page 28:

Experiments

- Navlab dataset (CMU) [Vu’09]
  - SICK laser scanner: resolution 0.5°, range 50 m, FOV 180°, frequency 37.5 Hz
  - Odometry: rotational and translational speed
  - Camera for visual reference
  - Real-life urban traffic

Page 29:

Experimental Results: video demo

- Execution time: ~120 ms on a PIV 3.0 GHz PC with 2 GB RAM

Page 30:

Defining the dynamic model of moving objects

- A dynamic model is required to track an object

[Diagram: the estimate X_t of the position at time t is propagated by the dynamic model to the prediction P_{t+1} of the position at time t+1]

- For highly dynamic objects, a multiple-model solution is needed:
  - several dynamic models are run in parallel;
  - the outputs of the models are fused.
- The dynamic models and the transition probabilities between these models (i.e. their interactions) are usually given a priori
- A wrong choice of dynamic models and interactions leads to problems during tracking

[Diagram, continued: the prediction is combined with the observation O_{t+1} at time t+1 to produce the estimate X_{t+1} of the position at time t+1]

=> Learning is required to determine the dynamic models and their interactions

Page 31:

Learning the dynamic models of moving objects

1. Use a set of predefined dynamic models (as exhaustive as possible)
2. Use the trajectories of past tracked objects to learn the interactions between the models (a minimal sketch follows below)
3. Analyze the interactions between the models to extract a subset of the initial set of predefined dynamic models

- See the PhD thesis of Julien Burlet (2007) for more information
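
An illustrative Python sketch of step 2 under simplifying assumptions: each trajectory is reduced to the sequence of the dynamic model that best fits each segment, and the interactions are estimated as normalized transition counts (the actual learning procedure is the one described in Burlet's thesis; names here are hypothetical):

```python
import numpy as np

MODELS = ["constant_velocity", "turning_left", "turning_right", "stopped"]

def learn_interactions(model_sequences, n_models=len(MODELS)):
    """model_sequences: list of lists of model indices, one list per past
    tracked trajectory. Returns the row-normalized transition matrix."""
    counts = np.ones((n_models, n_models))        # Laplace smoothing
    for seq in model_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Two (toy) trajectories expressed as best-fitting model indices per segment
trajectories = [[0, 0, 1, 0, 3], [0, 2, 2, 0]]
P = learn_interactions(trajectories)
print(np.round(P, 2))      # learned interaction (transition) probabilities
```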

Page 32:

Experiments

- Parknav platform (national project PUVAME) [Burlet’06]
  - A set of off-board cameras observing a parking lot
  - People present in this parking lot are detected and tracked
  - Their trajectories in the parking lot are collected
- Results
  - A set of dynamic models is defined
  - The interactions between these models are learned: an online process

Page 33:

Results

• Green: real trajectory (ground truth)

• Red: estimated trajectory with learned interactions

• Blue: estimated trajectory with a priori interactions

Page 34:

Part IV. Conclusion & Perspectives

Page 35:

Conclusion

- Simultaneous Localization And Mapping (SLAM)
  - A fast and robust method to perform SLAM + moving object detection (PhD thesis of Vu, 2009)
  - A new way to map the environment: Multiscale Gaussian Maps (PhD thesis of Yguel, 2009)
- Detection And Tracking of Moving Objects (DATMO)
  - A complete framework to perform DATMO + classification (PhD thesis of Vu, 2009)
  - Learning of dynamic models and their interactions (PhD thesis of Burlet, 2007)
- Theoretical contributions validated on real vehicles: no simulated data!
- 6 journal articles and 27 conference articles (mainly IEEE), 1 keynote speech at IEEE ICCP 2010
- 1 national project (PUVAME), 2 European projects (PReVENT & INTERSAFE2) and 1 industrial cooperation (SICK)
- 7 post-doctoral students

Page 36:

Short-term perspectives

- Environment representation and interpretation (PhD thesis of Azim)
  - Environment representation
  - Interpretation of the environment
- Frontal object perception (PhD thesis of Chavez)
  - The DATMO of [Vu’09] is not real time
  - Static objects should be taken into account
- Sensor data fusion (PhD thesis of Baig)
  - Fusion between laser scanner and vision
  - Comparison of different fusion strategies

=> European project Interactive (1/2010 - 7/2013)

Page 37:

Mid-term perspectives

- Learning to predict trajectories
  - Participation in the PhD thesis of Vasquez (2007)
  - Past trajectories are used to learn typical motion patterns
  - Motion patterns are used to predict future trajectories
- Integration of knowledge about the environment and the moving objects
- Ambient intelligence and perception
  - Sensors are present everywhere
  - Distributed perception and heterogeneous sensors
  - Ubiquitous robotics

Page 38:

Thank you!

Questions?