
Page 1

COMPUTER VISION FOR AUTONOMOUS NAVIGATION

June 5, 1988

Martial Hebert Carnegie-Mellon University

Page 2

Page 3

Content

1. Perception and mobile robots

2. Sensors

3. Representation of perceptual information for mobile robots

4. Detailed description of an autonomous vehicle: the NAVLAB


Page 4

Page 5

Degrees of Autonomy

Direct Operation

Teleoperation

Supervised Operation

Autonomous System


Page 6

Page 7

Applications of autonomous mobile robots (incomplete list, in no particular order)

Indoor navigation (SRI, Stanford, INRIA, LAAS, CMU)

Indoor cleaning (LIFIA, LAAS)

Construction (Rex Excavator (CMU))

On-road navigation (ALV project (CMU, Martin Marietta, Maryland Univ.))

Chauffeur (Munich Univ.)

Undersea exploration (UNH, Texas A&M, NBS (MAW))

Off-road exploration & transportation (Hughes AI, CMU, Martin Marietta (ALV), Ohio State walking machine)

Planetary explorer (CMU, JPL)

Surveillance (CMU/Denning Mobile Robots)


Page 8

Components of a mobile robot

(Figure: block diagram. Sensors build a sensed-world model within a 3-D world model; the world model, mission, and preloaded knowledge produce predictions and drive vehicle control: speed and steering.)

Page 9

PERCEPTION FOR NAVIGATION

(Figure: perception tasks for navigation: find road, track road, find intersection.)

Page 10

MODELING

Observations from multiple sensors and positions

Page 11

Challenges for perception

Uncertainty: Errors may accumulate as the vehicle moves if perception uncertainty is not explicitly represented.

Mobility: Variability between sensor observations.

3-D: Obstacles, terrain, etc. are part of a 3-D world.

Page 12

Sensors

Passive:
- Video camera
- Passive stereo vision

Active:
- Sonar
- Laser range finder

Page 13

Video cameras

Context:

- Object tracking: blob or edge extraction; vehicle tracking from image to image through feedback to the vehicle controller.

- Indoor navigation: edge detection, interpretation in a highly constrained environment (e.g. vertical and horizontal edges of walls and furniture).

- Road following: road edge extraction and tracking; road region extraction from color data.

Main problem: Calibration. A camera provides only 2-D information whereas the vehicle moves in a 3-D world.

Page 14

Calibration problem

(Figure: 2-D image of the 3-D world.)

Possible solutions: camera model; flat ground plane; vertical objects. (A minimal sketch of the flat-ground-plane idea follows.)
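To make the flat-ground-plane solution concrete, here is a minimal back-projection sketch. It is not the calibration of any system in these notes; the focal length, camera height, horizon row, and image center are illustrative assumptions for a level camera.

```python
# Flat-ground-plane back-projection sketch (all constants are assumed,
# illustrative values, not taken from any system in these notes).

F_PX = 500.0        # focal length in pixels (assumed)
CAM_HEIGHT_M = 2.0  # camera height above the ground in meters (assumed)
ROW_HORIZON = 120   # image row of the horizon for a level camera (assumed)
COL_CENTER = 256    # image column of the optical axis (assumed)

def ground_point(row, col):
    """Map a pixel to (distance ahead, lateral offset) in meters,
    assuming it sees a point on a flat ground plane."""
    dy = row - ROW_HORIZON
    if dy <= 0:
        raise ValueError("pixel at or above the horizon is not on the ground")
    ahead = F_PX * CAM_HEIGHT_M / dy           # farther ground -> smaller dy
    lateral = (col - COL_CENTER) * ahead / F_PX
    return ahead, lateral

print(ground_point(row=320, col=356))          # (5.0, 1.0)
```

Under this assumption every pixel below the horizon has a unique 3-D interpretation, which is what makes a single camera usable for road following.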

Page 15

Example: Road following (U. Maryland)

(Figure: processing loop)
- Original image (intensity, 256x256)
- Edge extraction in small windows (up to 64x64)
- Predictions of window locations in the image
- Road model in the vehicle's world

Page 16

Figure 2c: The original image, along with the windows and located road boundaries.

Page 17

Passive depth estimation

The location of a point can be computed from its projections from different viewpoints: from different cameras (stereo), or from different positions (e.g. motion).

- How to find good matches between images?
- What to match? (features vs. pixels)
- Binocular vs. trinocular?
- How to use the vehicle's motion?

Page 18

PASSIVE STEREO

(Figure: stereo geometry. An object point in the world is seen along one direction from the left image and another from the right image.)

Projection: $x = f X / (f - Z)$, $y = f Y / (f - Z)$. The difference between the left and right image coordinates of the same point is the disparity.
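For the usual rectified two-camera geometry, depth follows directly from disparity. The sketch below uses the common convention $Z = f b / d$ rather than the slide's $(f - Z)$ parameterization, and its focal length and baseline are assumed values.

```python
# Stereo triangulation sketch for a rectified camera pair, using the
# common convention Z = f * b / d (f and b below are assumed values).

def depth_from_disparity(d_px, f_px=500.0, baseline_m=0.5):
    """Depth of a point whose left and right image columns differ by d_px."""
    if d_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / d_px

def depth_error(z_m, dd_px=0.5, f_px=500.0, baseline_m=0.5):
    """First-order depth error for a matching error of dd_px pixels:
    dZ ~ Z^2 / (f * b) * dd, so the error grows quadratically with range."""
    return z_m ** 2 / (f_px * baseline_m) * dd_px

print(depth_from_disparity(10.0))   # 25.0 m
print(depth_error(25.0))            # ~1.25 m at 25 m range
```

The quadratic growth of depth error with range is the behavior discussed on the later "Uncertainty from passive depth estimation" slide.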

Page 19

Finding good matches

Similarity measures

Epipolar constraints

Ordering constraints

Coarse-to-fine

Small range of disparity values

Overdetermination: trinocular techniques


Page 20

Example: Mobi (Stanford)

Goal: Navigation in hallways

- Two-camera stereo
- Features: vertical edges
- Initial matches: similarity of grey level curves around edges
- Local consistency: similarity of grey level curves between edges
- Constraint propagation

Figure 4: Stereo match proposals and grey level curves.

Page 21

Trinocular stereo

(Figure: three images; candidate points A1, A2, A3 and the epipolar lines L31 and L32 in the third image.)

A1 and A2 are images of the same point only if there is an A3 at the intersection of L31 and L32.

=> No search.

Page 22

Page 23

Using vehicle motion

Use depth estimates from previous positions to:

Predict matches and reduce the search

Improve (reduce the uncertainty of) current depth estimates


Page 24

FIDO (CMU)

(Figure: left and right images.)

- Interest points in the left image
- Prediction boxes for corresponding points in the right image
- Matching
- Computation of 3-D position

Page 25

Page 26

Pixel-based (or iconic) techniques

(Example: Matthies' depth from motion)

- Do not match features; correlate the images directly.
- At each pixel, the minimum of the correlation error over the disparity d gives the best d and its variance.
- The depth map is updated over time.

(Figure 2: Iconic depth estimation block diagram. The intensity images at t and t-1 are correlated; the measured disparity and variance are integrated with the disparity predicted from the vehicle motion, then regularized.)
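The heart of the iconic approach is a per-pixel blend of the predicted and measured disparity, weighted by their variances. Below is a minimal sketch of that flavor of update; it is not Matthies' exact algorithm, which also regularizes the disparity map spatially.

```python
def fuse_disparity(d_pred, var_pred, d_meas, var_meas):
    """Scalar Kalman-style update of one pixel's disparity estimate.

    d_pred, var_pred: disparity predicted from the previous frame and the
    known camera motion; d_meas, var_meas: disparity measured by
    correlating the current images.
    """
    k = var_pred / (var_pred + var_meas)      # gain: trust the less uncertain
    d_new = d_pred + k * (d_meas - d_pred)    # blended estimate
    var_new = (1.0 - k) * var_pred            # uncertainty always shrinks
    return d_new, var_new

# Vague prediction + confident measurement -> result follows the measurement.
print(fuse_disparity(2.0, 1.0, 3.0, 0.25))    # (2.8, 0.2)
```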

Page 27

Page 28

Uncertainty from passive depth estimation

(Figure: rays from the left and right cameras; their intersection region defines the uncertainty on the computed point.)

Page 29

Sonar

Time-of-return of sound wave. Typical characteristics:

Single depth measurement

Limited to 30 ft.

30° field of view (Polaroid)

Low data rate

Low cost

Applications:

One unit:
- Soft bumper
- Surface (e.g. wall) tracking

Sonar ring:
- Obstacle avoidance
- Map building (more on that later)

Page 30

Sonar ring (typically 24 Polaroid units)

(Figure: vehicle with overlapping sonar cones; empty regions, with increased resolution due to overlap.)

Page 31

Figure 2: The Neptune mobile robot, with a pair of cameras and the sonar ring. For experiments in sensor integration, the cameras were mounted lower, so that their horizon line would be close to the cross-sectional view provided by the sonar ring.

Page 32

Exceptions

FMC sonar for outdoor vehicles: 30 kHz, 64 ft. range, 16 x 24 images


Page 33

Exceptions

Underwater sonar imaging

Fig. 1: Underwater robot EAVE-East (Experimental Autonomous Vehicle) in its launch cradle at the University of New Hampshire. The large white cylinders contain electronics; the smaller, lower white cylinders hold batteries; and the darker cylinders are thrusters.


Page 34

Acoustic Scan

(Figure: acoustic scan of a room, with an LMS line fit to the returns.)

Error sources: random errors, specular reflections, double reflections, large angle of incidence, reflections in a dihedral.

Page 35

Laser range finders

Distance measurement by projection of a single laser beam. Two types:

Triangulation (simple design, but calibration problems on a mobile platform)

Time-of-flight (self-contained unit, no calibration problems, better accuracy, but state-of-the-art technology)

Characteristics:

- Indoor or outdoor
- High resolution images (either 1-D or 2-D)
- Fast
- High resolution depth measurement
- High cost, fragile, power/resolution/speed tradeoff
- Sensitive to material properties (e.g. specular materials)

Page 36

Laser range finders: Triangulation

(Figure: triangulation geometry with the laser and the detector.)

Resolution/power/time tradeoff

Page 37

Example: HILARE (LAAS)

Size: 1.10 x 1.10 x 0.70 m; weight: 400 kg; speed: 1 m/sec max; drive wheels plus a caster.

(Figure 1: the HILARE robot. Figure 3: environment circular scanning with the laser range-finder; 3a: actual data, 3b: on-line polygonal representation.)

Page 38

Example: Environmental Research Institute of Michigan (ERIM) scanner

Time-of-flight

64 x 256 range images, 30° x 80° field of view

Reflectance image

8-bit range from 0 to 64 feet (3 inch resolution)

Similar devices: Odetics, ASV sensor for the Ohio State Univ. walking machine.

(Figure: range image and reflectance image.)
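The numbers above imply a range quantum of 64 ft / 256 levels = 0.25 ft, i.e. 3 inches per 8-bit level. A tiny decoding sketch, assuming a linear encoding that starts at zero:

```python
# 64 ft spanned by 256 8-bit levels -> 0.25 ft (3 inches) per level.
FT_PER_LEVEL = 64.0 / 256.0

def decode_range_ft(value):
    """Convert an 8-bit range pixel to feet (assumes a linear encoding)."""
    if not 0 <= value <= 255:
        raise ValueError("expected an 8-bit value")
    return value * FT_PER_LEVEL

print(decode_range_ft(200))   # 50.0 ft
```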

Page 39

Page 40

Representation of perceptual features for mobile robots

1. Representations of uncertainty and sensor fusion

2. Special representations: terrain maps

3. Probability maps

Page 41

SENSOR FUSION: How to combine observations from different sensors (e.g. passive stereo and active sonar)

(Figure: a feature observed by three different sensors; the vehicle and the three uncertainty ellipses.)

Page 42

MOTION FUSION: How to merge the two observations?

(Figure: a feature observed from position 1 and again from position 2, each observation with its uncertainty ellipse.)

Page 43

TRADITIONAL APPROACH

(Figure: a batch of observations (sensor measurements) is combined into the best estimate of the observed feature, with a resulting uncertainty ellipse.)

Page 44

KALMAN FILTERING

(Figure: a new sensor observation is combined with the current estimate to produce the new estimate.)

Page 45

Linear Filtering

Measurement model: $z = Hx + v$

Where:

$z$ is the $l \times 1$ measurement vector,

$x$ is the $n \times 1$ vector to be estimated,

and $v$ is a random additive measurement error with $l \times l$ covariance matrix $R$.

The best estimate is:

$\hat{x} = (H^T R^{-1} H)^{-1} H^T R^{-1} z$

Justification: maximum likelihood approach.

Page 46

Bayesian approach

Best estimate according to the minimum variance criterion:

$\min_{\hat{x}} \int (\hat{x} - x)^T (\hat{x} - x)\, p(x|z)\, dx$

Solution: $\hat{x} = E[x|z]$.

In the case of a linear model, the best estimate is:

$\hat{x} = (P_0^{-1} + H^T R^{-1} H)^{-1} H^T R^{-1} z$

where $P_0$ is the a priori covariance matrix of $x$.

Page 47

Recursive filtering

New problem: we want to find the best estimate $\hat{x}_{k+1}$ based on a new measurement $z_{k+1}$ and the previous estimate $\hat{x}_k$.

Recursive solution:

$\hat{x}_{k+1} = \hat{x}_k + K_k (z_{k+1} - H_{k+1} \hat{x}_k)$

where $K_k$ is a gain matrix given by:

$K_k = P_k H_{k+1}^T (R_{k+1} + H_{k+1} P_k H_{k+1}^T)^{-1}$

and the covariance matrix of $\hat{x}_{k+1}$ is updated by:

$P_{k+1} = (I - K_k H_{k+1}) P_k$
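A small numpy sketch of exactly these update equations; the measurement model and noise values are arbitrary illustrations.

```python
import numpy as np

def kalman_update(x_hat, P, z, H, R):
    """One recursive step, matching the equations above:
    K = P H^T (R + H P H^T)^-1, x <- x + K (z - H x), P <- (I - K H) P."""
    S = R + H @ P @ H.T                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # gain matrix
    x_new = x_hat + K @ (z - H @ x_hat)        # updated estimate
    P_new = (np.eye(len(x_hat)) - K @ H) @ P   # updated covariance
    return x_new, P_new

# Example: refine a 2-D feature position from a noisy measurement of x only.
x = np.zeros(2)
P = np.eye(2) * 100.0                          # vague prior
H = np.array([[1.0, 0.0]])                     # we observe the first component
R = np.array([[0.5]])                          # measurement noise variance
x, P = kalman_update(x, P, np.array([3.1]), H, R)
print(x, np.diag(P))                           # x[0] moves to ~3.08, var drops
```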

Page 48

Extended Kalman filter

New problem: we have an observation $z$ that depends on a parameter vector $a$ through the relation:

$f(z, a) = 0$, with $z = z' + \epsilon$, where $\epsilon$ is additive noise.

Given an estimate $a^*$ and a new measurement $z$, what is the new estimate of $a$?

Linearizing $f$ about $(z, a^*)$ gives a linear measurement equation (as before), $y = H a + v$, where:

$y = \frac{\partial f}{\partial a}(z, a^*)\, a^* - f(z, a^*)$, $H = \frac{\partial f}{\partial a}(z, a^*)$,

and the covariance of the noise $v$ follows from $\frac{\partial f}{\partial z}$ and the covariance of $\epsilon$.

Page 49

Building maps from matching line segments (Faugeras, Ayache, INRIA)

(Diagram: the set $S_i$ of 3-D line segments observed at vehicle position i is fed, together with the vehicle position, into a Kalman filter that updates the map of 3-D line segments.)

Page 50

Application to line matching

A 3-D line is represented by the intersection of two planes:

$x = a z + p$, $y = b z + q$

Given two segments $S_i$ ($i = 1, 2$) supported by lines $L_i$ of parameters $(a_i, b_i, p_i, q_i)$, each with a covariance matrix $\Lambda_i$ (from trinocular stereo), and a transformation $T$ from frame 2 to frame 1 with covariance matrix $\Lambda$, decide whether:

$S_1$ and $S_2$ are two instances of the same segment, and

compute the uncertainty (i.e. the covariance matrix) of the fused segment.

Page 51

The EKF is used to compute:

1. The estimate of the transform by $T$ of $S_2$: $L'_2 = (a'_2, b'_2, p'_2, q'_2)$, and

2. Its covariance matrix $\Lambda'_2$.

The two lines are matched if the generalized distance between $L_1$ and $L'_2$ is below a threshold $s$ corresponding to a probability of 95% ($\chi^2$ distribution).
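A sketch of this gating test, assuming the 4-parameter line representation (a, b, p, q) from the previous page; 9.49 is the 95% point of the $\chi^2$ distribution with 4 degrees of freedom.

```python
import numpy as np

CHI2_95_4DOF = 9.49   # 95% chi-square threshold, 4 degrees of freedom

def lines_match(l1, cov1, l2t, cov2t):
    """Mahalanobis gating: l1 and the transformed l2t are (a, b, p, q)
    vectors; their covariances are summed to form the test metric."""
    diff = l1 - l2t
    d2 = diff @ np.linalg.inv(cov1 + cov2t) @ diff   # generalized distance
    return d2 < CHI2_95_4DOF

l1 = np.array([0.10, -0.20, 1.00, 2.00])
l2t = np.array([0.12, -0.19, 1.05, 1.95])
cov = np.eye(4) * 0.01
print(lines_match(l1, cov, l2t, cov))                # True: segments fuse
```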

Page 52

Figure 14: Edge segments matched in the stereo pair of Figure 12.

Figure 18: Covariance matrices attached to the endpoints of the reconstructed segments of the office room observed in position 1.

Figure 13: Polygonal approximation of the edge points of a stereo pair of the same office room observed in position 2.

Figure 15: Edge segments matched in the stereo pair of Figure 13.

Figure 17: Horizontal and vertical projections of the reconstructed segments of the office room observed in position 2.

Figure 23: Fusion in position 2 of the segments reconstructed in positions 1 and 2.

Page 53

ELEVATION MAPS

(Figure: a sensor above a reference ground plane; each discrete grid cell stores elevation, uncertainty, slope, and curvature; the cell value is a traversability cost for the path planner.)

Page 54

(Figure: two sensor positions and the terrain regions visible from each.)

Page 55

Application: cross-country navigation

(K. Olin, D. Payton, Hughes AI Center)

Goal: Real-time perception for cross-country autonomous navigation. Local navigation uses an initial map-based plan.

Vehicle: Martin Marietta 8-wheel vehicle.

Sensor: ERIM laser range finder (delivers 64x256 range images; range is from 0 to 64 feet).

Environment: Outdoor rugged cross-country terrain, including bushes, gullies, rocks, and steep slopes.

Page 56

Perception cycle for cross-country navigation

(Diagram: the range image from the laser range finder produces a sparse elevation map, then a smoothed dense map and an obstacle map; the maps are fused using the position sensors; the vehicle model, trajectories, and strategies drive navigation.)

Page 57

Vehicle model:

(Figure: vehicle model checks: normal, bad suspension, slope, clearance.)

Navigation:

- Elevation map
- Vehicle model
- Predefined trajectories

Page 58

Figure 4: 3D view of CEM.

Figure 8: CEM from Figure 3 with curved trajectories.

Page 59

Occupancy maps:

(Moravec, Elfes (CMU), Stewart (MIT/Woods Hole))

General description:

The world is represented by a set of regularly spaced cells (grid).

Each cell contains a probability P.

P is the probability that the cell is part of an object.

Each sensor measurement is modeled as a distribution of probabilities over the grid.

The probability at a grid cell is computed by combining the contributions from many sensor measurements.

Page 60

Is this grid cell occupied? Visible? Empty? With what probability?

(Figure: discrete grid, sensor fields of view, and vehicle locations.)

Page 61

3-D OCCUPANCY GRIDS

6 degrees of freedom: R (rotation), Tx, Ty, Tz

(Figure: sensor and its measurement cone in the 3-D grid.)

Page 62

Applications:

Moravec/Elfes/Matthies:

Indoor robot with sonar and stereo cameras. Occupancy map building from both sensors. Used in navigation.

Stewart:

Underwater vehicle with sonar. Full 3-D occupancy map built from many images registered by using the vehicle's position sensors. Extraction of surfaces from the 3-D grid. Application: detailed bottom surface mapping.

Advantages:

- Sensor model taken into account.
- Feature extraction is not necessary.
- Natural way of fusing multiple sensors.

Page 63

Computing the probabilities

Problem: Define the probability that a given cell is in state si, given evidence (i.e. measurement) e. Bayesian model:

$P(OCC|R) = \dfrac{P(R|OCC)\, P(OCC)}{P(R|OCC)\, P(OCC) + P(R|EMP)\, P(EMP)}$

P(R|OCC) and P(R|EMP) are given by the sensor model; P(OCC) and P(EMP) are the current probability values in the map. Initially: P(OCC) = P(EMP) = 0.5 (no information).

Page 64

Computing the probabilities

The simplifying assumption P(R|EMP) = 1 - P(R|OCC) provides intuitive properties:

- Updating is independent of the ordering of measurements.
- P(OCC) = 0.5 (unknown) is a no-op.
- Conflicting measurements cancel.
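A numerical sketch of this update rule, showing the order independence and the 0.5 no-op:

```python
def update_cell(p_occ, p_r_given_occ):
    """Bayesian occupancy update with the simplifying assumption
    P(R|EMP) = 1 - P(R|OCC)."""
    p_r_given_emp = 1.0 - p_r_given_occ
    num = p_r_given_occ * p_occ
    return num / (num + p_r_given_emp * (1.0 - p_occ))

p = update_cell(0.5, 0.9)            # strong "occupied" reading: 0.5 -> 0.9
p = update_cell(p, 0.3)              # mild "empty" reading
print(round(p, 3))                   # 0.794

q = update_cell(update_cell(0.5, 0.3), 0.9)
print(round(q, 3))                   # 0.794: same readings, reversed order

print(update_cell(0.7, 0.5))         # a 0.5 reading is a no-op: stays 0.7
```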

Page 65

Probabilistic sonar map (from Elfes & Matthies)

(Figure: probability profile along the sonar beam: an empty region between the sensor and the measured range, and an occupied region around the measured range.)

Page 66

Figure 3: Probabilistic Sonar Sensor Interpretation Model. The probability profile shown corresponds to a reading taken by a sensor positioned at the upper left, pointing to the lower right. The plane shows the UNKNOWN level. Values above the plane represent OCCUPIED probabilities, and values below represent EMPTY probabilities.

Page 67

Uncertainty from stereo

Stereo cameras:

$P(OCC|R) = P_1 + (P_2 - P_1) \cdot a$

$P_1$ = 0.5 = unknown probability level
$P_2$ = maximum allowed $P(OCC)$
$a$ = 1.0 at edge points, < 1.0 elsewhere

Page 68

Page 69

Figure 6: Occupancy maps generated by sonar, stereo, and sensor integration. OCCUPIED regions are marked by shaded squares, EMPTY by dots fading to white space, and UNKNOWN by + signs.

Page 70

Fig. 13: Example run (IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3, June 1987). This run was performed indoors, in the Mobile Robot Lab. Distances are in ft; grid size is 0.5 ft. The planned path is shown as a dotted line, and the route actually followed by the robot as solid line segments. The starting point is a solid + and the goal a solid x.

Page 71

Detailed description of an autonomous vehicle: the NAVLAB

Self-contained vehicle for:

- Road following with or without a map.
- Object detection.
- Map building.

using:

- One color camera.
- One laser range finder.
- On-board computing (Suns).
- Controller.

Page 72

Page 73

Page 74

Object Recognition

Motion Planning

(Figure: example with a tree used as a landmark.)

Page 75: Computer Vision For Autonomous Navigation · Used in navigation. Stewart: Underwater vehicle with sonar. Full 3-D occupancy map built from many images registered by using the vehicle's

(Figure: blackboard architecture; modules exchange tokens through a Blackboard Manager.)

- Moving coordinates
- Time
- Distributed processing

Page 76

Environment

Paved curved roads.

Non-uniform road appearance.

Changing conditions (illumination, weather, etc.).

Discrete obstacles.

Strong shadows from obstacles.


Page 77

Page 78

Components of the road following algorithm

Color classification: The color of the road is significantly different from that of the background.

Texture computation: The sides of the road are usually more textured.

Road location in image: The geometry of the road must be determined.

Calibration: The road detected in image space must be converted to the vehicle's coordinate system in order to steer the vehicle.

Page 79

(Diagram: road-following processing loop. The image is classified into road and non-road pixels using road-model and non-road-model appearances (RGBT); a Hough transform on the classified pixels yields the road position and orientation; the road model (width, position, orientation, surface appearance) is updated by self-clustering, taking vehicle motion into account.)

Page 80

Color classification

The color space is divided into n classes. Each pixel is classified into one of the classes using the distance:

$d_i = (x - m_i)^T C_i^{-1} (x - m_i)$

Where:

$x$ is the (red, green, blue) vector at the current pixel.

$m_i$ is the mean (red, green, blue) value of class $i$.

$C_i$ is the covariance matrix of class $i$.

The set of classes is divided into road classes and non-road classes.
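A minimal sketch of this per-pixel classification. The two class statistics below are made up for illustration; they are not NAVLAB values.

```python
import numpy as np

# Each class: (mean RGB, covariance). Made-up statistics for illustration.
CLASSES = {
    "road":  (np.array([110.0, 110.0, 115.0]), np.eye(3) * 80.0),
    "grass": (np.array([ 70.0, 130.0,  60.0]), np.eye(3) * 120.0),
}

def classify(pixel):
    """Assign the pixel to the class with the smallest Mahalanobis distance."""
    def dist2(name):
        mean, cov = CLASSES[name]
        d = pixel - mean
        return d @ np.linalg.inv(cov) @ d
    return min(CLASSES, key=dist2)

print(classify(np.array([105.0, 112.0, 118.0])))   # -> road
```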

Page 81

Page 82

Page 83

(Texture computation pipeline)

- Intensity image (480x512)
- High-frequency gradient (240x256)
- Low-frequency gradient (60x64)
- Average value (120x128)
- Normalized gradient: hf gradient / (α · lf gradient + β · average value) (480x512)
- Binary image of micro edges (480x512)
- Image of edge density (30x32)
- Texture classification: Road/Non-Road

Page 84

Page 85

Page 86

Combining texture and color

For each class and each pixel:

$P_i = (1 - a) P_i^T + a P_i^C$

Where:

$P_i$ = confidence that the pixel belongs to class i

$P_i^T$ = confidence that the pixel belongs to class i based on texture

$P_i^C$ = confidence that the pixel belongs to class i based on color

Final confidence for each pixel:

$C = \max(P_i, i \in Road) - \max(P_i, i \in NonRoad)$

C > 0 => the pixel is classified as road.

Page 87

Page 88

Road representation

(Figure: road representation in the image: vanishing line and vanishing point.)

Page 89

Hough transform

(Figure: Hough parameter space and the horizon point.)

Page 90

Voting algorithm

Quantized 2-D parameter space $(\rho, \theta)$ (32 levels for $\rho$, 20 levels for $\theta$).

An image point (row, col) votes for the set of road locations:

$(\rho, \theta) = (col + (row - r_{horizon}) \tan\theta,\ \theta)$

Each pixel casts votes proportional to its road/non-road classification confidence.

The maximum in $(\rho, \theta)$ space is the reported road location.
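A sketch of this voting scheme. The 32 x 20 quantization matches the slide; the horizon row, orientation range, and image width are illustrative assumptions.

```python
import numpy as np

N_RHO, N_THETA = 32, 20                   # quantization from the slide
R_HORIZON = 0                             # horizon row (assumed)
THETAS = np.linspace(-0.6, 0.6, N_THETA)  # candidate orientations, radians

def vote(pixels, n_cols=512):
    """pixels: (row, col, confidence) triples for road-classified points."""
    acc = np.zeros((N_RHO, N_THETA))
    for row, col, conf in pixels:
        for j, theta in enumerate(THETAS):
            rho = col + (row - R_HORIZON) * np.tan(theta)
            i = int(np.floor(rho * N_RHO / n_cols))   # quantize rho
            if 0 <= i < N_RHO:
                acc[i, j] += conf                     # vote weight = confidence
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return i, THETAS[j]                               # reported road location

# Pixels along a road edge drifting right with depth vote coherently.
pixels = [(r, 250 + int(0.2 * r), 1.0) for r in range(100, 400, 10)]
print(vote(pixels))
```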

Page 91

Page 92

Page 93

(Diagram: processing cycle. The red and green bands are reduced and edges extracted for the texture image; color classification and texture classification are combined; sample colors and statistics for the road and grass classes are updated by self-clustering; the highest vote in Hough space gives the road location.)

Page 94

Simple calibration procedure: measure the distance between rows $r_i$ and the vehicle.

(Figure: detected road in the image.)

Distance between $Q_i$ and the center of the vehicle $= (C_i - c_i) / ppm_i$

$ppm_i$ = pixels per meter at row $r_i$.
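A sketch of this lookup; the per-row pixels-per-meter values are made-up measurements.

```python
# Row-wise calibration sketch: measured pixels-per-meter at a few image
# rows (made-up values) gives the offset of the road center from the
# vehicle center, as (C_i - c_i) / ppm_i on the slide.

CALIB_PPM = {400: 60.0, 300: 35.0, 200: 18.0}   # row -> pixels per meter
IMAGE_CENTER_COL = 256                          # column of the vehicle center

def lateral_offset_m(row, road_center_col):
    return (road_center_col - IMAGE_CENTER_COL) / CALIB_PPM[row]

print(lateral_offset_m(300, 291))               # 1.0 m to the right
```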

Page 95

Dynamic road following

(Diagram: statistics from the previous image are used to classify the current image; updated statistics and the road location feed the vehicle steering toward the predicted vehicle position t seconds later.)

Page 96

Page 97

Some lessons from road following

More accurate calibration is needed.

More dynamic range in the color cameras is required to handle a wide range of conditions.

A more detailed road model is needed (intersections, curved roads, ...).

More flexible color classification.

Page 98

Range data

Sensor:

Time-of-flight laser range finder (ERIM): 64 x 256 range images, 30° x 80° field of view

Reflectance image

8-bit range from 0 to 64 feet (3 inch resolution)

Purpose:

- Obstacle detection: path planning around discrete objects
- Terrain modeling: path planning across open terrain
- 3-D map building: exploration, incremental description improvement

Page 99

Page 100

Obstacle detection

(Pipeline: range image -> discrete elevation map -> polygonal map of obstacles.)

Page 101

3-D map building

(Figure: images 1 and 2 are related by the vehicle motion T; shadowed regions are unobserved; a prediction window in image 2 is computed from T and image 1.)

Page 102

Range processing cycle

(Diagram: the range image is converted to an elevation map, from which the obstacle map and terrain regions are extracted and merged into the current map used by the vehicle.)

Page 103

Terrain modeling

The part of the field of view that is not part of any object can be segmented into regions:

1. Find edges in the elevation map.

2. Find seed points in the elevation map.

3. Apply region growing based on a smoothness constraint (see the sketch below).

4. Report the regions (e.g. as polygons).
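A minimal sketch of the region-growing step; the grid and the smoothness threshold are illustrative assumptions.

```python
import numpy as np
from collections import deque

def grow_region(elev, seed, max_step=0.15):
    """Grow a region from a seed cell, adding 4-neighbors whose elevation
    differs by less than max_step (the smoothness constraint)."""
    rows, cols = elev.shape
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(elev[nr, nc] - elev[r, c]) < max_step):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

elev = np.zeros((5, 5))
elev[:, 3:] = 1.0                             # a 1 m step at column 3
print(sorted(grow_region(elev, (2, 0))))      # growth stops at the step
```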

Page 104

Putting everything together

(Diagram: the perception modules update the map; the driving units use the map and a pre-loaded map to issue steering commands.)

Page 105

Additions to the current system

Uncertainty representation for road location, vehicle position, and objects' locations; updating through filtering.

Better path planner, including off-road capabilities.

Improved vision: two cameras, intersections, ...

Map revision.

Page 106

Device               Difficulty            Years left
Toys                 cost                  1
Watchdog             reliability           ?? (Brooks, Durrant-Whyte)
Factory transport    planned motion        1 (Crowley)
Industrial cleaning  coverage              10 (Chatila)
Wheelchair           planning, liability   1993 (Harmon)
Tank simulator       rough terrain         ??
Mine sweeper         coverage              5 (Somalvico)
Street sweeper       traffic               30 (Binford)
Household servant    tough manipulation    10 (Chang)
Mail delivery        manipulation          ??
Garbage collection   manipulation          ??
Tank                 weapons               10 (Harmon)
Construction         force                 ?? (Somalvico)
Chauffeur            psychology            7 (Graefe)

Page 107

A partial list of research efforts in mobile robotics (from J.L. Crowley, "The State of the Art in Mobile Robotics")

LAAS-CNRS, 7 Ave. Colonel Roche, 31077 Toulouse, France. Project Hilare. Principal investigators: Georges Giralt, Raja Chatila, J-P. Laumond. Perhaps one of the longest running projects. Recent applications include a cleaning robot for the Paris Metro as well as a warehouse robot.

SRI International, AI Group, 333 Ravenswood Ave., Menlo Park, California, 94025 USA. Principal Investigators: Stan Rosenschein and Leslie Kaelbling. Robot hardware: Flakey the robot, designed and built by Stan Reifle.

CMU Robotics Institute, Schenley Park, Pittsburgh, PA, 15213 USA. Principal investigators: Takeo Kanade, Chuck Thorpe, Hans Moravec, Red Whittaker. At least 5 distinct projects have been performed at CMU in the last 5 years. Major projects include the ALV NavLab, the Terregator, the Denning surveillance robot, and Moravec's Pluto, Neptune and Uranus.

MIT AI Lab, 545 Technology Square, Cambridge, MA, 02139. Principal Investigator: Rod Brooks. Robots: Allen, Tom, Jerry, Sydney, Seymour.

INRIA, Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay, France. Principal Investigators: Olivier Faugeras, Nicholas Ayache, Francis Lustman. A commercial copy of the INRIA robot is sold by RobotSoft SARL, of Asnières, France.

LIFIA, IMAG, 46 ave Felix Viallet, 38031 Grenoble, France. Principal Investigator: Jim Crowley. Developing a surveillance robot for project EUREKA - Mithra. Currently using a Denning robot named Lurch.

GM Research, Warren, Michigan, USA. Principal Investigators: Steve Holland, Bob Tilove.

Univ. of Amsterdam, Kruislaan 4090, The Netherlands. Principal Investigator: Dr. Willem Duinker.

Stanford University, AI Laboratory, Computer Science Dept., Stanford University, Stanford, CA, 94305 USA. Principal Investigator: Thomas Binford.

ORNL, P.O. Box X, Oak Ridge, TN, 37831 USA. Principal Investigator: Charles Weisbin.

FMC Corp., 1205 Coleman Ave., Santa Clara, CA 95052. Principal Investigator: Andrew Chang. Military applications of mobile robots.

NBS, Industrial Systems Division, National Bureau of Standards, Building 220/B124, Gaithersburg, MD 20899. Principal Investigator: Marty Herman.

Universität der Bundeswehr München, Institut für Messtechnik, 8014 Neubiberg, W. Germany. Principal Investigator: Volker Graefe. Real-time road-following system integrating simple real-time vision with control theory.

Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 USA. Principal Investigator: Brian Wilcox.

Page 108

References

R.C. Arkin, E. Riseman, A. Hanson. Visual strategies for mobile robot navigation. In Proc. Workshop on Computer Vision, 1987.

N. Ayache, F. Lustman. Fast and reliable trinocular stereovision. In Proc. ICCV'87, 1987.

A. Bergman, C.K. Cowan. Noise-tolerant range analysis for autonomous navigation. In IEEE Conf. on Robotics and Automation, San Francisco, 1986.

R. Brooks. Aspects of mobile robot visual map making. In Second International Robotics Research Symposium, MIT Press, 1985.

R. Brooks, A.M. Flynn, T. Marill. Self calibration of motion and stereo vision for mobile robot navigation. In Proc. Image Understanding Workshop, Cambridge, 1988.

M.K. Brown. Locating object surfaces with an ultrasonic range sensor. In Proc. IEEE International Conference on Robotics and Automation, St Louis, 1985.

S.S. Chen. Multisensor fusion and navigation of mobile robots. International Journal of Intelligent Systems, Vol. 2, 1987.

J. Crowley. Dynamic world modeling for an intelligent mobile robot using a rotating ultrasonic ranging device. In Proc. IEEE International Conference on Robotics and Automation, St Louis, 1985.

M.J. Daily, J.G. Harris, K. Reiser. Detecting obstacles in range imagery. In Proc. Image Understanding Workshop, Los Angeles, 1987.

M.J. Daily, J.G. Harris, K. Reiser. An operational perception system for cross-country navigation. In Proc. Image Understanding Workshop, Cambridge, 1988.

E.D. Dickmanns, A. Zapp. A curvature-based scheme for improving road vehicle guidance by computer vision. In Proc. SPIE Conference 727 on Mobile Robots, Cambridge, 1986.

E.D. Dickmanns, A. Zapp. Autonomous high-speed road vehicle guidance by computer vision. In Proc. 10th IFAC, Munich, 1987.

K.C. Drake, E.S. McVey, R.M. Inigo. Experimental position and ranging results for mobile robots. Journal of Robotics and Automation, Vol. 3, 1987.

M. Drumheller. Mobile robot localization using sonar. PAMI-9, 1987.

R.T. Dunlay, D.G. Morgenthaler. Obstacle detection and avoidance from range data. In Proc. SPIE Mobile Robots Conference, Cambridge, MA, 1986.

Page 109


A. Elfes. Sonar-based real-world mapping and navigation. Journal of Robotics and Automation, Vol. 3, 1987.

O.D. Faugeras, F. Lustman, G. Toscani. Motion and structure from motion from point and line matches. In Proc. ICCV'87, 1987.

O.D. Faugeras, N. Ayache, B. Faverjon. Building visual maps by combining noisy stereo measurements. In Proc. IEEE Conf. on Robotics and Automation, 1986.

G. Giralt, R. Chatila, M. Vaisset. An integrated navigation and motion control system for autonomous multisensory mobile robots. In Proc. 1st International Symposium on Robotics Research, Cambridge, 1984.

K.D. Gremban, C.E. Thorpe, T. Kanade. Geometric camera calibration using systems of linear equations. In Proc. Image Understanding Workshop, Cambridge, 1988.

S.Y. Harmon. The ground surveillance robot (GSR): An autonomous vehicle designed to transit unknown terrain. Journal of Robotics and Automation, Vol. 3, 1987.

M. Hebert, T. Kanade. First results on outdoor scene analysis. In Proc. IEEE Robotics and Automation, San Francisco, 1985.

M. Hebert, T. Kanade. 3-D vision for outdoor navigation by an autonomous vehicle. In Proc. Image Understanding Workshop, Cambridge, 1988.

T. Kanade, C. Thorpe, W. Whittaker. Autonomous land vehicle at CMU. In Proc. ACM Computer Conference, Cincinnati, February 1986.

D.J. Kriegman, T.O. Binford. Generic models for robot navigation. In Proc. Image Understanding Workshop, Cambridge, 1988.

D. Kriegman, E. Triendl, T.O. Binford. A mobile robot: sensing, planning and locomotion. In Proc. IEEE Conf. on Robotics and Automation, 1987.

D. Kuan, G. Phipps, A.C. Hsueh. Autonomous land vehicle road following. In Proc. ICCV, 1987.

R. Kuc, M.W. Siegel. Physically based simulation model for acoustic sensor robot navigation. PAMI-9, 1987.

S.P. Liou, R.C. Jain. Road following using vanishing points. Computer Vision, Graphics, and Image Processing, 1987.

L. Matthies, S.A. Shafer. Error modelling in stereo navigation. Technical Report CMU-RI-TR-86-140, Carnegie-Mellon University, the Robotics Institute, 1986.

L. Matthies, S.A. Shafer. Error modeling in stereo navigation. Journal of Robotics and Automation, Vol. 3, 1987.

Page 110


L. Matthies, R. Szeliski, T. Kanade. Kalman filter-based algorithms for estimating depth from image sequences. In Proc. Image Understanding Workshop, Cambridge, 1988.

D. Miller. A spatial representation system for mobile robots. In Proc. IEEE International Conference on Robotics and Automation, St Louis, 1985.

H.P. Moravec, A. Elfes. High-resolution maps from wide angle sonar. In Proc. IEEE International Conference on Robotics and Automation, St Louis, 1985.

R.C. Nelson, J. Aloimonos. Using flow field divergence for obstacle avoidance in visual navigation. In Proc. Image Understanding Workshop, Cambridge, 1988.

K.E. Olin, F.M. Vilnrotter, M.J. Daily, K. Reiser. Developments in knowledge-based vision for obstacle detection and avoidance. In Proc. Image Understanding Workshop, Los Angeles, 1987.

J.L. Olivier, F. Ozguner. A navigation algorithm for an intelligent vehicle with a laser rangefinder. In Proc. IEEE Conf. on Robotics and Automation, San Francisco, 1986.

S.V. Rao, S.S. Iyengar, C.C. Jorgenson, C.R. Weisbin. Concurrent algorithms for autonomous robot navigation in an unexplored terrain. In Proc. IEEE Conf. on Robotics and Automation, San Francisco, 1986.

A.R. de Saint Vincent. A 3D perception system for the mobile robot HILARE. In Proc. IEEE Conf. on Robotics and Automation, San Francisco, 1986.

K. Sakurai, H. Zen, H. Ohta, Y. Ushioda, S. Ozawa. Analysis of a road image as seen from a vehicle. In Proc. ICCV, 1987.

S.A. Shafer, A. Stentz, C.E. Thorpe. An architecture for sensor fusion in a mobile robot. Technical Report CMU-RI-TR-86-9, Carnegie-Mellon University, the Robotics Institute, 1986.

R.C. Smith, P. Cheeseman. On the representation and estimation of spatial uncertainty. International Journal of Robotics Research, 1986.

W.K. Stewart. A non-deterministic approach to 3-D modeling underwater. In Proc. Fifth International Symposium on Unmanned Untethered Submersible, University of New Hampshire, 1987.

C.E. Thorpe. The CMU rover and the FIDO vision and navigation system. PhD thesis, Carnegie-Mellon University, 1984.

C.E. Thorpe, M. Hebert, T. Kanade, S.A. Shafer. Vision and navigation for the Carnegie-Mellon Navlab. PAMI 10(3), 1988.

D.Y. Tseng, M.J. Daily, K.E. Olin, K. Reiser, F.M. Vilnrotter. Knowledge-based vision techniques annual technical report. Technical Report ETL-0431, U.S. Army ETL, Fort Belvoir, VA, 1986.

S. Tsuji, J.Y. Zheng. Visual path planning for a mobile robot. In Proc. IJCAI, 1987.

Page 111

R. Wallace, K. Matsuzaki, Y. Goto, J. Webb, J. Crisman, T. Kanade. Progress on robot road following. In IEEE Conf. on Robotics and Automation, San Francisco, 1986.

A.M. Waxman, J.J. LeMoigne, L.S. Davis, B. Srinivasan, T.R. Kushner, E. Liang, T. Siddalingaiah. A visual navigation system for autonomous land vehicles. Journal of Robotics and Automation, Vol. 3, 1987.

W.J. Wolfe, N. Marquina, Eds. Mobile Robots II. In Proc. SPIE 852, Cambridge, MA, 1987.

Page 112