

Robotics and Autonomous Systems 32 (2000) 1–16

Overview Paper

Vision-based intelligent vehicles: State of the art and perspectives

Massimo Bertozzi a,∗, Alberto Broggi b, Alessandra Fascioli a

a Dipartimento di Ingegneria dell’Informazione, Università di Parma, I-43100 Parma, Italy
b Dipartimento di Informatica e Sistemistica, Università di Pavia, I-27100 Pavia, Italy

Received 31 March 1999; received in revised form 1 October 1999
Communicated by F.C.A. Groen

Abstract

Recently, a large emphasis has been devoted to Automatic Vehicle Guidance since the automation of driving tasks carries a large number of benefits, such as the optimization of the use of transport infrastructures, the improvement of mobility, the minimization of risks, travel time, and energy consumption.

This paper surveys the most common approaches to the challenging task of Autonomous Road Following, reviewing the most promising experimental solutions and prototypes developed worldwide using AI techniques to perceive the environmental situation by means of artificial vision.

The most interesting results and trends in this field, as well as the perspectives on the evolution of intelligent vehicles in the next decades, are also sketched out. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Intelligent transportation systems (ITS); Automatic vehicle guidance; Intelligent vehicles; Machine vision

1. Introduction

In the last decades, in the field of transportation systems, a large emphasis has been given to issues such as improving safety conditions, optimizing the exploitation of transport networks, reducing energy consumption, and preserving the environment from pollution. The endeavors in solving these problems have triggered the interest towards a new field of research and application, automatic vehicle driving, in which new

This research has been partially supported by the Italian National Research Council (CNR) under the frame of the Progetto Finalizzato Trasporti 2 and Progetto Madess 2.

∗ Corresponding author. E-mail addresses: [email protected] (M. Bertozzi), [email protected] (A. Broggi), [email protected] (A. Fascioli)

techniques are investigated for the entire or partial automation of driving tasks. These tasks include following the road and keeping within the correct lane, maintaining a safe distance between vehicles, regulating the vehicle’s speed according to traffic conditions and road characteristics, moving across lanes in order to overtake vehicles and avoid obstacles, finding the shortest route to a destination, and moving and parking within urban environments.

The interest in intelligent transportation systems (ITS) technologies was born about 20 years ago, when the problem of people and goods mobility began to arise, fostering the search for new alternative solutions. Automatic vehicle driving, intelligent route planning, and other extremely high-level functionalities were selected as main goals. Governmental

0921-8890/00/$ – see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0921-8890(99)00125-6


institutions activated this initial explorative phase by means of various projects worldwide, involving a large number of research units who worked in a cooperative way, producing several prototypes and possible solutions, all based on rather different approaches.

In Europe the PROMETHEUS project (PROgraM for a European Traffic with Highest Efficiency and Unprecedented Safety) started this explorative stage in 1986. The project involved more than 13 vehicle manufacturers and several research units from governments and universities of 19 European countries. Within this framework, a number of different approaches regarding ITS were conceived, implemented, and demonstrated.

In the United States a great number of initiatives were launched to deal with the mobility problem, involving many universities, research centers, and automobile companies. After this pilot phase, in 1995 the US government established the National Automated Highway System Consortium (NAHSC) [2].

Also in Japan, where the mobility problem is much more intense and evident, some vehicle prototypes were developed within the framework of different projects. Similarly to what happened in the US, in 1996 the Advanced Cruise-Assist Highway System Research Association (AHSRA) was established amongst a large number of automobile industries and research centers [31], which developed different approaches to the problem of Automatic Vehicle Guidance.

The main results of this first stage were a deep analysis of the problem and the development of a feasibility study to understand the requirements and possible effects of the application of ITS technology.

The field of ITS is now entering its second phase, characterized by a maturity in its approaches and by new technological possibilities which allow the development of the first experimental products. A number of prototypes of intelligent vehicles have been designed, implemented, and tested on the road. The design of these prototypes was preceded by the analysis of solutions deriving from similar and close fields of research, and exploded with a great flourishing of new ideas, innovative approaches, and novel ad hoc solutions. Robotics, artificial intelligence, computer science, computer architectures, telecommunications, control and automation, and signal processing are just some of the principal research areas from which the

main ideas and solutions were first derived. Initially, underlying technological devices — such as head-up displays, infrared cameras, radars, sonars — derived from expensive military applications, but, thanks to the increased interest in these applications and to the progress of industrial production, today’s technology offers sensors, processing systems, and output devices at very competitive prices. In order to test a wide spectrum of diverse approaches, these prototypes of automatic vehicles are equipped with a large number of different sensors and computing engines.

Section 2 of this paper describes the motivations which underlie the development of vision-based intelligent vehicles, and illustrates their requirements and peculiarities. Section 3 surveys the most common approaches to autonomous Road Following developed worldwide, while Section 4 ends the paper outlining our perspectives on the evolution of intelligent vehicles.

2. Vision-based intelligent vehicles

2.1. Improving vehicles or infrastructures?

Automatic driving functionalities can be achieved by acting on infrastructures or vehicles. Depending on the specific application, either choice possesses advantages and drawbacks. Enhancing the road infrastructure may yield benefits to those kinds of transportation which are based on repetitive and prescheduled routes, such as public transportation and industrial robotics. On the other hand, it requires a complex and extensive organization and maintenance which can become cumbersome and extremely expensive in the case of extended road networks for private vehicle use. An ad hoc structuring of the environment can only be considered for a reduced subset of the road network, e.g., for building a fully automated highway on which only automatic vehicles, public or private, can drive.

For this reason, the systems that are expected to be achieved on a short-term basis can only be vehicle-autonomous or, at most, exploit already existing traffic infrastructures.

In this review only in-vehicle applications are considered, while road infrastructure, inter-vehicle communication, satellite communication, and route planning issues are not covered.


Any on-board system for ITS applications needs to meet some important requirements:
• The final system that will be installed on a commercial vehicle must be robust enough to adapt to different conditions and changes of environment, road, traffic, illumination, and weather. Moreover, the hardware system needs to be resistant to mechanical and thermal stress.

• On-board systems for ITS applications are safety-critical systems for which a high degree of reliability is required. Consequently, the project has to be thorough and rigorous during all its phases, from the requirements specification to the design and implementation. An extensive phase of testing and validation is therefore of paramount importance.

• For marketing reasons the design of an ITS system has to be driven by strict cost criteria (it should cost no more than 10% of the vehicle price), thus requiring a specific engineering phase. Operative costs (such as power consumption) need to be kept low as well, since the vehicle’s performance should not be affected by the use of ITS apparata.

• The system’s hardware and sensors have to be kept compact in size and should not disturb car styling.

• Since ITS systems must be triggered off and controlled by a human driver, they need a user-friendly man–machine interface.

2.2. Which kind of sensors should be employed?

Among the sensors widely used in indoor robotics, tactile sensors and acoustic sensors are of no use in automotive applications: the former because of the vehicles’ speed, the latter because of their reduced detection range.

Laser-based sensors and millimeter-wave radars detect the distance of objects by measuring the travel time of a signal emitted by the sensors themselves and reflected by the object, and are therefore classified as active sensors. Their common principal drawbacks are the low spatial resolution and slow scanning speed. However, millimeter-wave radars are more robust to rain and fog than laser-based radars, even if more expensive.
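The time-of-flight principle behind these active sensors can be sketched in a few lines. This is an illustrative example, not taken from the paper; the ~400 ns delay is an assumed value.

```python
# Illustrative sketch (not from the paper): range measurement by an
# active time-of-flight sensor such as a laser or millimeter-wave radar.
# The sensor emits a pulse and times the round trip of its reflection.

C = 299_792_458.0  # speed of light in m/s


def range_from_round_trip(delta_t_seconds: float) -> float:
    """Distance to the reflecting object: the pulse covers twice the range."""
    return C * delta_t_seconds / 2.0


# A reflection arriving after roughly 400 ns corresponds to about 60 m.
print(range_from_round_trip(400e-9))
```

The short round-trip times involved (hundreds of nanoseconds at road distances) are one reason why scanning a whole scene point by point is slow compared to a single camera exposure.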

Vision-based sensors are defined as passive sensors and have an intrinsic advantage over laser and radar sensors: the possibility of acquiring data in a non-invasive way, thus not altering the environment.

Image scanning is performed fast enough for ITS applications. Moreover, they can be used for some specific applications for which visual information plays a basic role (such as lane markings localization, traffic signs recognition, obstacle identification) without requiring any modifications to road infrastructures. Unfortunately vision sensors are less robust than millimeter-wave radars in foggy, night, or direct sunshine conditions.

Active sensors possess some specific peculiarities which, in this specific application, result in advantages over vision: they can measure some quantities, such as movement, in a more direct way than vision, and require less powerful computing resources as they acquire a considerably lower amount of data.

Notwithstanding, besides the problem of environment pollution, the wide variation in reflection ratios caused by different reasons — such as obstacles’ shape or material — and the need for the maximum signal level to comply with some safety rules, the main problem with the use of active sensors is represented by interference among sensors of the same type, which could be critical for a large number of vehicles moving simultaneously in the same environment, as, e.g., in the case of autonomous vehicles traveling on intelligent highways.

Hence, foreseeing a massive and widespread use of autonomous sensing agents, the use of passive sensors, such as cameras, offers key advantages over the use of active ones.

Obviously machine vision does not extend sensing capabilities beyond human possibilities (e.g., in foggy conditions or during the night with no specific illumination), but it can, however, help the driver in case of failures, e.g., lack of concentration or drowsiness.

2.3. Machine vision for intelligent vehicles

Some important issues must be carefully considered in the design of a vision system for automotive applications.

In the first place, ITS systems require faster processing than other applications, since the vehicle speed is bounded by the processing rate. The main problem that has to be faced when real-time imaging is concerned, and which is intrinsic to the processing of images, is the large amount of data, and therefore computation


involved. As a result, specific computer architectures and processing techniques must be devised in order to achieve real-time performance. Nevertheless, since the success of ITS apparata is tightly related to their cost, the computing engines cannot be based on expensive processors. Therefore, either off-the-shelf components or ad hoc dedicated low-cost solutions must be considered.

Secondly, in the automotive field no assumptions can be made on key parameters, e.g., the scene illumination or contrast, which are directly measured by the vision sensor. Hence, the subsequent processing must be robust enough to adapt to different environmental conditions (such as sun, rain, fog) and to their dynamic changes (such as transitions between sun and shadow, or the entrance to or exit from a tunnel).

Furthermore, other key issues, such as the robustness to the vehicle’s movements and to drifts in the camera’s calibration, must be handled as well.

However, recent advances in both computer and sensor technologies promote the use of machine vision also in the intelligent vehicles field. The developments in computational hardware, such as a higher degree of integration and a reduction of the power supply voltage, make it possible to produce, at an affordable price, machines that can deliver a high computing power with fast networking facilities. Current technology allows the use of SIMD-like processing paradigms even in general-purpose processors, such as the new generation of processors that include multimedia extensions.

In addition to this, current cameras include new important features that permit the solution of some basic problems directly at sensor level. For example, image stabilization can be performed during acquisition, while the extension of camera dynamics removes the need for the processing required to adapt the acquisition parameters to specific light conditions. The resolution of the sensors has been drastically enhanced, and in order to decrease the acquisition and transfer time, new technological solutions can be found in CMOS sensors, such as the possibility of addressing pixels independently as in traditional memories. Another key advantage of CMOS-based sensors is that their integration on the processing chip seems to be straightforward.

Many different parameters must be evaluated in the design and choice of an image acquisition device.

First of all, some parameters tightly coupled with the algorithms regard the choice of monocular vs binocular (stereo) vision and the sensors’ angle of view (some systems adopt a multi-camera approach, using more than one camera with different viewing angles, e.g., fish-eye or zoom). The resolution and the depth (number of bits per pixel) of the images have to be selected as well (this also includes the selection of color vs monochrome images).

Other parameters — intrinsic to the sensor — must be considered. Although the frame rate is generally fixed for CCD-based devices (25 or 30 Hz), the dynamics of the sensor is of basic importance: conventional cameras allow an intensity contrast of 500:1 within the same image frame, while most ITS applications require a 10,000:1 dynamic range for each frame and 100,000:1 for a short image sequence. Different approaches have been studied to meet this requirement, ranging from the use of CMOS-based cameras with a logarithmically compressed dynamic [27,29] to the interpolation and superimposition of the values of two subsequent images taken from the same camera [23].
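The idea of a logarithmically compressed dynamic can be sketched as follows. This is a minimal illustration with assumed parameters, not the response curve of the sensors in [27,29]: a 100,000:1 span of scene intensities is mapped into 256 output codes so that equal intensity *ratios* get equal code spacing.

```python
import math

# Illustrative sketch (assumed parameters): a logarithmic sensor response
# compressing a 100,000:1 scene intensity range into 8-bit output codes.

I_MIN, I_MAX = 1.0, 100_000.0  # relative scene intensities (100,000:1)


def log_response(intensity: float, levels: int = 256) -> int:
    """Map an intensity in [I_MIN, I_MAX] to an integer code on a log scale."""
    x = math.log(intensity / I_MIN) / math.log(I_MAX / I_MIN)
    return min(levels - 1, int(x * levels))


# Each decade of intensity gets the same number of codes, so both the
# shadowed and the sunlit parts of a scene keep usable contrast.
codes = [log_response(i) for i in (1, 10, 100, 1_000, 10_000, 100_000)]
print(codes)
```

A linear 8-bit sensor, by contrast, would spend almost all of its 256 codes on the brightest decade and crush the dark regions to zero.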

In conclusion, although extremely complex and highly demanding, thanks to the great deal of information it can deliver (it has been estimated that humans perceive visually about 90% of the environment information required for driving), computer vision is a powerful means for sensing the environment and has been widely employed to deal with a large number of tasks in the automotive field.

3. Automatic Road Following: An overview of the approaches

Among the complex and challenging tasks that have received most attention in Automatic Vehicle Guidance is Road Following. It is based on lane detection (which includes the localization of the road, the determination of the relative position between vehicle and road, and the analysis of the vehicle’s heading direction), and obstacle detection (which is mainly based on localizing possible obstacles on the vehicle’s path).

In this section, a survey of the most common approaches to Road Following is presented, focusing in particular on vision-based systems.


3.1. Lane detection

In most prototypes of autonomous vehicles developed worldwide, lane following is divided into the following two steps: initially, the relative position of the vehicle with respect to the lane is computed, and then actuators are driven to keep the vehicle in the correct position. Some examples of the strategies which may be adopted in this vision-based lateral control problem are discussed in [30].

Conversely, some early systems were not based on the preliminary detection of the road’s position, but obtained the commands to be issued to the actuators (steering wheel angles) directly from visual patterns detected in the incoming images. As an example, the ALVINN (Autonomous Land Vehicle In a Neural Net) system is based on a neural net approach: it is able to follow the road after a training phase with a large set of images [16].

Nevertheless, since the knowledge of the lane position can be conveniently exploited by other driving assistance functions, the localization of the lane is generally performed.

A few systems have been designed to handle completely unstructured roads; e.g., the SCARF (Supervised Classification Applied to Road Following [9]) and PVR III (POSTECH Road Vehicle [17]) systems are based on the use of color cameras and exploit the assumption of a homogeneously colored road to extract the road region from the images.

More generally, however, lane detection has been reduced to the localization of specific features such as markings painted on the road surface. This restriction eases the detection of the road; nevertheless, two basic problems must be faced.
• The presence of shadows (projected by trees, buildings, bridges, or other vehicles) produces artifacts onto the road surface, and thus alters the road texture.

Most research groups face this problem using highly sophisticated image filtering algorithms. For the most part, gray-level images are used, but in some cases color images are used: this is the case of the MOSFET (Michigan Off-road Sensor Fusing Experimental Testbed) autonomous vehicle, which uses a color segmentation algorithm that maximizes the contrast between lane markings and road [22].

• Other vehicles on the path partly occlude the visibility of the road and therefore also of the road markings.

To cope with this problem, some systems have been designed to investigate only a small portion of the road ahead of the vehicle, where the absence of other vehicles can be assumed. As an example, the LAKE and SAVE autonomous vehicles rely on the processing of the image portion corresponding to the nearest 12 m of road ahead of the vehicle, and it has been demonstrated that this approach is able to safely maneuver the vehicle on highways and even on belt ways or ramps with a bending radius down to 50 m [8].

The RALPH (Rapidly Adapting Lateral Position Handler) system, instead, reduces the portion of the image to be processed according to the result of a radar-based obstacle detection module [24].

In other systems the area in which lane markings are to be looked for is determined first. The research group of the Laboratoire Régional des Ponts-et-Chaussées de Strasbourg exploits the assumption that there should always be a chromatic contrast between road and off-road (or obstacles), at least in one color component; to separate the components, the concept of chromatic saturation is used [7].

Since lane detection is generally based on the localization of specific patterns (lane markings), it can be performed with the analysis of a single still image. In addition, some assumptions can aid the detection and/or speed up the processing.
• Due to both physical and continuity constraints, the processing of the whole image can be replaced by the analysis of specific regions of interest only (the so-called focus of attention), in which the features of interest are more likely to be found. This is a generally followed strategy that can be adopted assuming a priori knowledge of the road environment.

For example, the system developed by the Robert Bosch GmbH research group employs a model of both the road and the vehicle’s dynamics to determine the portion of the road where it is most likely to find lane markings [13].

• The assumption of a fixed or smoothly varying lane width allows the enhancement of the search criterion, limiting the search to almost parallel lane markings.


As an example, on the PVR III vehicle, lane markings can be detected using both neural networks and simple vision algorithms: two parallel stripes of the acquired image are selected and filtered using Gaussian masks and zero crossings to find vertical edges. The result is matched against a given model (a typical road pattern with parallel lane markings) to compute a steering angle and a fitness evaluation indicating the confidence in the result [17].

Analogously, the RALPH system is based on the processing of the image portion corresponding to the road about 20–70 m ahead of the vehicle, depending on the vehicle’s speed and the presence of obstacles. The perspective effect is removed from this portion of the image, and the determination of the curvature is carried out according to a number of possible curvature models for a specific road template featuring parallel road markings [24].

• The reconstruction of the road geometry can be simplified by assumptions on its shape.

The research groups of the Universität der Bundeswehr [20] and Daimler-Benz [12] base their road detection functionality on a specific road model: lane markings are modeled as clothoids. In a clothoid the curvature depends linearly on the curvilinear coordinate (the arc length). This model has the advantage that the knowledge of only two parameters allows the full localization of the lane markings and the computation of other parameters such as the lateral offset within the lane, the lateral speed with respect to the lane, and the steering angle.
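The two clothoid parameters are the curvature c0 at the vehicle and its rate of change c1, so that kappa(s) = c0 + c1·s along the arc length s. A minimal sketch of how such a model yields the lane geometry ahead (the parameter values and the simple Euler integration are assumptions for illustration, not the cited systems' implementation):

```python
import math

# Illustrative sketch (assumed parameter values): a clothoid road model.
# Curvature varies linearly with arc length s:  kappa(s) = c0 + c1 * s,
# so two parameters fully describe the lane geometry ahead of the vehicle.

def clothoid_points(c0: float, c1: float, length: float, step: float = 0.5):
    """Numerically integrate the clothoid to (x, y) points ahead (Euler)."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    s = 0.0
    while s < length:
        heading += (c0 + c1 * s) * step  # d(heading)/ds = kappa(s)
        x += math.cos(heading) * step    # x: meters ahead of the vehicle
        y += math.sin(heading) * step    # y: lateral offset (left positive)
        s += step
        points.append((x, y))
    return points


# Gentle left curve: 1 km radius at the vehicle, slowly tightening.
pts = clothoid_points(c0=1e-3, c1=1e-5, length=60.0)
x_end, y_end = pts[-1]
```

Fitting only (c0, c1) to the detected markings, instead of free curve points, is what makes the model robust and directly usable for control: the steering angle follows from the curvature at the vehicle.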

Other research groups use a polynomial representation for lane markings. In the MOSFET autonomous vehicle, for instance, lane markings are modeled as parabolas [22]. A simplified Hough transform is used to accomplish the fitting procedure.

Similarly, the lane detection system developed at The Ohio State University Center for Intelligent Transportation Research relies on a polynomial curve [25]. It assumes a flat road with either continuous or dashed bright lane markings. The history of previously located lane markings is used to determine the region of interest, thus reducing the portion of the image to be processed. The algorithm extracts the significant bright regions from the image plane and stores them in a vector list.

Qualitative parameters such as the convergence of the lines at infinity or the lane width, known or estimated, are used to extract the candidate lane markings from the list. Finally, in order to also handle dashed lines, a low-order polynomial curve is fitted to the computed vectors.
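A low-order polynomial fit of this kind can be sketched as follows. This is an illustrative example, not the cited systems' code: a least-squares fit stands in for the simplified Hough transform mentioned above, and the synthetic points are an assumption. Dashed markings are handled naturally, since the curve is fitted through whatever points were detected.

```python
# Illustrative sketch: fit a parabola x = a*y^2 + b*y + c to candidate
# lane-marking points (y ahead, x lateral) by least squares, solving the
# 3x3 normal equations directly.

def solve3(A, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x


def fit_parabola(points):
    """Least-squares (a, b, c) for x = a*y^2 + b*y + c through (y, x) points."""
    n = len(points)
    s = lambda f: sum(f(y, x) for y, x in points)
    A = [[s(lambda y, x: y**4), s(lambda y, x: y**3), s(lambda y, x: y**2)],
         [s(lambda y, x: y**3), s(lambda y, x: y**2), s(lambda y, x: y)],
         [s(lambda y, x: y**2), s(lambda y, x: y), n]]
    rhs = [s(lambda y, x: x * y**2), s(lambda y, x: x * y), s(lambda y, x: x)]
    return solve3(A, rhs)


# Synthetic "dashed marking" sampled from x = 0.002*y^2 + 0.1*y + 3.
pts = [(y, 0.002 * y**2 + 0.1 * y + 3.0) for y in range(0, 40, 4)]
a, b_, c = fit_parabola(pts)
```

The recovered coefficient a encodes the curvature, b the heading of the marking, and c its lateral offset, which is why such low-order fits are convenient for control.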

On the contrary, other systems adopt a more generic model for the road. The ROMA vision-based system uses a contour-based method [26]. A dynamic road model permits the processing of small portions of the acquired images, therefore enabling real-time performance. Actually, only straight or slightly curved roads without intersections are included in this model. Images are processed using a gradient-based filter and a programmable threshold. The road model is used to follow contours formed by pixels that feature a significant gradient direction value.

Analogously, a generic triangular road model has been used on the MOB-LAB experimental vehicle by the research groups of the Università di Parma [3] and Istituto Elettrotecnico Nazionale “G. Ferraris”, CNR, Italy [10].

• The knowledge of the specific camera calibration, together with a priori knowledge of the road surface/slope (i.e., a flat road without bumps), can be exploited to simplify the mapping between image pixels and their corresponding world coordinates.

Most of the previously discussed systems exploit the assumption of a flat road in front of the vehicle in the determination of obstacle distance or road curvature, once the specific features of interest have been localized in the acquired image. The GOLD (Generic Obstacle and Lane Detection) system [1] implemented on the ARGO autonomous vehicle and the already mentioned RALPH system, however, exploit this assumption also in the lane determination process. In fact, in both cases the lane markings detection is performed in a different image domain, representing a bird’s eye view of the road, which can be obtained thanks to the flat road assumption.

Table 1 summarizes the pros and cons of the assumptions on which the most common approaches to lane detection rely.
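The flat-road mapping underlying such a bird's-eye view can be sketched with a pinhole camera model: each pixel below the horizon defines a viewing ray, and intersecting that ray with the road plane gives a unique ground point. This is an illustrative sketch under assumed calibration values (camera height, pitch, focal length), not the GOLD or RALPH implementation.

```python
import math

# Illustrative sketch (assumed calibration): under the flat-road
# assumption a pixel (u, v) maps to a unique point on the road plane,
# which is the basis of the bird's-eye ("inverse perspective") view.

H = 1.2                  # camera height above the road, meters (assumed)
PITCH = math.radians(6)  # downward pitch of the optical axis (assumed)
F = 800.0                # focal length in pixels (assumed)
CX, CY = 320.0, 240.0    # principal point of a 640x480 image (assumed)


def pixel_to_road(u: float, v: float):
    """Intersect the viewing ray of pixel (u, v) with the road plane.

    Returns (X, Y): meters ahead of and to the left of the camera,
    or None for pixels at or above the horizon.
    """
    a = (u - CX) / F                              # ray direction, camera frame
    b = (v - CY) / F
    d_x = math.cos(PITCH) - b * math.sin(PITCH)   # forward component
    d_y = -a                                      # lateral (left) component
    d_z = -math.sin(PITCH) - b * math.cos(PITCH)  # vertical component
    if d_z >= 0:
        return None                               # ray never hits the road
    t = -H / d_z                                  # camera is at height H
    return (t * d_x, t * d_y)


# A pixel on the image's vertical centerline, below the horizon:
ahead, lateral = pixel_to_road(320.0, 360.0)
```

Applying this mapping to every pixel of the lower image yields the bird's-eye view in which lane markings become (almost) parallel straight or gently curved stripes, greatly simplifying their detection; the same formula also explains why an object rising above the road plane violates the model and shows up distorted.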

When the processing is aimed not only at mere lane detection but also at the tracking of lane markings,


Table 1
Pros and cons of the most typical assumptions in lane detection

• Focus of attention
  Pros: analysis of a small image portion, fast processing, real-time performance, low-cost hardware.
  Cons: does not fit large features; the choice of the region of interest is critical.

• Fixed lane width
  Pros: enhancement of the search criterion (parallel lane markings), robustness to shadows and occlusions.
  Cons: does not match roads with variable lane width.

• Road shape
  Pros: strong model, robust w.r.t. shadows and occlusions, eases road geometry reconstruction, simplifies vehicle control.
  Cons: requires fitting with complex equations, high computational power needed, fails if the road does not match the model.

• A priori knowledge on the road surface/slope
  Pros: simplifies the mapping between image pixels and world coordinates (e.g., determination of obstacle distance, road curvature).
  Cons: hypotheses (e.g., flat road) are not always met in real cases; approximations or recalibration needed.

the temporal correlation between consecutive frames can be used either to ease the feature determination or to validate the result of the processing. The lane detection module implemented and tested on the ARGO vehicle falls in the first case, as it restricts the image portion to be analyzed to the nearest neighborhood of the markings previously detected [5]. In a different manner, the lane detection module developed by the research group of the Istituto Elettrotecnico Nazionale “G. Ferraris”, once lane markings are found by means of a triangular model, uses the result of previous computations to validate the current one [10].

Fig. 1. Depending on the definition of obstacle, different techniques are used.

3.2. Obstacle detection

The criteria used for the detection of obstacles depend on the definition of what an obstacle is (see Fig. 1). In some systems the determination of obstacles is limited to the localization of vehicles, which is then based on a search for specific patterns, possibly supported by other features, such as shape, symmetry, or the use of a bounding box.

The researchers of the Istituto Elettrotecnico Nazionale “G. Ferraris” limit the processing to the image portion that is assumed to represent the road,


thus relying on the previously discussed lane detection module. This area of the image is analyzed, and borders that could represent a potential vehicle are looked for and examined [10].

Conversely, the obstacle detection algorithm developed at the Universität der Bundeswehr is based on both an edge detection process and an obstacle model; the system is able to detect and track up to 12 objects around the vehicle. The continuously updated obstacle variables are: distance, direction, relative speed, relative acceleration, lateral position, lateral speed, and size [20].

When obstacle detection is limited to the localization of specific patterns, as in the previous two examples, the processing can be based on the analysis of a single still image, but the approach is not successful when an obstacle does not match the model.

A more general definition of obstacle, which obviously leads to more complex algorithmic solutions, identifies as an obstacle any object that obstructs the vehicle’s driving path or, in other words, anything rising out significantly from the road’s surface. In this case, obstacle detection is reduced to identifying the free space (the area in which the vehicle can safely move) instead of recognizing specific patterns.

Due to the general applicability of this definition, the problem is dealt with using more complex techniques; the most common ones are based on the processing of two or more images, such as
• the analysis of the optical flow field and
• the processing of non-monocular images.

In the first case more than one image is acquired by the same sensor at different time instants, whilst in the second one multiple cameras acquire images simultaneously, but from different points of view. Besides their intrinsic higher computational complexity, caused by a significant increment in the amount of data to be processed, these techniques must also be robust enough to tolerate noise caused by vehicle movements and drifts in the calibration of the multiple cameras’ setup.

The optical flow-based technique requires the analysis of a sequence of two or more images: a 2D vector is computed in the image domain, encoding the horizontal and vertical components of the velocity of each pixel. The result can be used to compute ego-motion, which in some systems is directly extracted from odometry; obstacles can be detected by analyzing the difference between the expected and real velocity fields.
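
The expected-versus-measured comparison can be sketched as follows (the array shapes, the threshold, and the zero expected flow used in the example are illustrative assumptions, not taken from any surveyed system):

```python
import numpy as np

def flow_residual_obstacles(measured_flow, expected_flow, threshold=1.5):
    """Flag pixels whose measured optical flow deviates from the flow
    predicted by ego-motion (e.g. from odometry) over a flat road.
    measured_flow, expected_flow: (H, W, 2) arrays of (u, v) vectors.
    Returns a boolean obstacle mask."""
    residual = np.linalg.norm(measured_flow - expected_flow, axis=2)
    return residual > threshold
```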

As an example, the ROMA system integrates an obstacle detection module that is based on the use of an optical flow technique in conjunction with data coming from an odometer [19].

Similarly, ASSET-2 (A Scene Segmenter Establishing Tracking v2) is a complete real-time vision system for the segmentation and tracking of independently moving objects. Its main feature is that it does not require any camera calibration. It tracks objects, is capable of correctly handling occlusions amongst obstacles, and automatically picks up each new object that enters the scene. ASSET-2 initially builds a sparse image flow field and then segments it into clusters that feature homogeneous flow variation. Temporal correlation is used to filter the result, thereby improving accuracy [28].

On the other hand, the processing of non-monocular image sets requires identifying correspondences between pixels in the different images: two images in the case of stereo vision, and three images in the case of trinocular vision. The advantage of analyzing stereo images instead of a monocular sequence lies in the possibility of directly detecting the presence of obstacles, which, in the case of an optical flow-based approach, is indirectly derived from the analysis of the velocity field. Moreover, in a limit condition where both vehicle and obstacles have small or null speeds, the optical flow-based approach fails while the other can still detect obstacles.
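
The correspondence search at the heart of stereo vision can be illustrated with a brute-force SAD (sum of absolute differences) matcher on one rectified scanline (the window size and disparity range are arbitrary choices for this sketch, not those of any surveyed system):

```python
import numpy as np

def scanline_disparity(left_row, right_row, patch=3, max_disp=16):
    """For each pixel in the left row of a rectified stereo pair, find the
    horizontal shift (disparity) in the right row that minimises the sum of
    absolute differences over a small 1D window. Nearby objects produce
    larger disparities."""
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for x in range(patch, n - patch):
        ref = left_row[x - patch:x + patch + 1]
        best_cost, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - patch) + 1):
            cand = right_row[x - d - patch:x - d + patch + 1]
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

Real systems use 2D windows, sub-pixel interpolation, and consistency checks; the sketch only shows why the amount of data to be processed, and hence the cost, grows with the disparity search range.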

The UTA project (Urban Traffic Assistant) of the Daimler-Benz research group, e.g., aims at an intelligent stop and go for inner-city traffic, using stereo vision and obtaining 3D information in real-time. In addition, the UTA demonstrator is able to recognize traffic signs, traffic lights and walking pedestrians as well as the lane, zebra crossings, and stop lines [12].

The Massachusetts Institute of Technology group also developed a cost-effective stereo vision system. The system is used for three-dimensional lane detection and traffic monitoring as well as for other on-vehicle applications. The system is able to separate partially overlapped vehicles and distinguish them from shadows.

Furthermore, to decrease the intrinsic complexity of stereo vision, some domain-specific constraints are generally adopted.

Table 2
Comparison of the different approaches to obstacle detection

Analysis of a single image
  Pros: Simple algorithmic solutions; fast processing; does not suffer from vehicle movements
  Cons: Loss of information about the scene’s depth unless specific assumptions are made; not successful when obstacles do not match the model

Optical flow
  Pros: Detection of generic objects; allows computation of ego-motion and obstacles’ relative speed
  Cons: Computationally complex; sensitive to vehicle movements and drifts in calibration; fails when both vehicle and obstacle have small or null speed

Stereo vision
  Pros: Detection of generic objects; allows 3D reconstruction
  Cons: Computationally complex (though domain-specific constraints may reduce complexity); sensitive to vehicle movements and drifts in calibration

In the GOLD system, the removal of the perspective effect from stereo images yields two images that differ only where the initial assumption of a flat road is not valid, thus detecting the free space in front of the vehicle [1].
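
A simplified sketch of this idea, assuming the two images have already been remapped to a common bird’s-eye view (the thresholding and the free-space bookkeeping below are our own simplifications, not the GOLD implementation):

```python
import numpy as np

def free_space_from_ipm_pair(ipm_left, ipm_right, threshold=30):
    """Given left and right images already remapped to a common bird's-eye
    view (inverse perspective mapping), pixels violating the flat-road
    assumption differ between the two views. Thresholding the absolute
    difference yields an obstacle mask; per column, the rows between the
    vehicle (bottom of the image) and the first obstacle are free space."""
    diff = np.abs(ipm_left.astype(int) - ipm_right.astype(int))
    obstacle_mask = diff > threshold
    h, w = obstacle_mask.shape
    free_rows = np.full(w, h, dtype=int)  # no obstacle: whole column is free
    for col in range(w):
        hits = np.nonzero(obstacle_mask[:, col])[0]
        if hits.size:
            free_rows[col] = h - 1 - hits.max()  # rows of free space from the bottom
    return obstacle_mask, free_rows
```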

Analogously, the University of California research unit developed an algorithm that remaps the left image using the point of view of the right image, thus detecting disparities in correspondence with the obstacles. A Kalman filter is then used to track obstacles [18].

Table 2 compares the strong and weak points of the different approaches to the obstacle detection problem.

As mentioned above, a great deal of different techniques have been proposed in the literature and tested on a number of prototype vehicles in order to solve the Road Following problem, but only a few of them provide an integrated solution (e.g., lane detection and obstacle detection), which, obviously, leads both to improved quality of the results and to faster, more efficient processing.

3.3. Hardware trends

In the early years of ITS applications a great deal of custom solutions were proposed, based on ad hoc, special-purpose hardware. This recurrent choice was motivated by the fact that the hardware available on the market at a reasonably low cost was not powerful enough to provide real-time image processing capabilities. As an example, the researchers of the Universität der Bundeswehr developed their own system architecture: several special-purpose boards were included in the Transputer-based architecture of the VITA vehicle [11].

Others developed or acquired ad hoc processing engines based on SIMD computational paradigms to exploit the spatial parallelism of images. Among them are the 16k MasPar MP-2 installed on the experimental vehicle NavLab I [14,15] at Carnegie Mellon University (CMU) and the massively parallel architecture PAPRICA [6], jointly developed by the Università di Parma and the Politecnico di Torino and tested on the MOB-LAB vehicle.

Besides selecting the proper sensors and developing specific algorithms, a large percentage of this first research stage was therefore dedicated to the design, implementation, and test of new hardware platforms. In fact, when a new computer architecture is built, not only the hardware and architectural aspects (such as instruction set, I/O interconnections, or computational paradigm) need to be considered, but software issues as well. Low-level basic libraries must be developed and tested along with specific tools for code generation, optimization and debugging.

In the last few years, the technological evolution has led to a change: almost all research groups are shifting towards the use of off-the-shelf components for their systems. In fact, commercial hardware has nowadays reached a low price/performance ratio. As an example, both the NavLab 5 vehicle from CMU and the ARGO vehicle from the Università di Parma are presently driven by systems based on general-purpose processors. Thanks to the current availability of fast internetworking facilities, even some MIMD solutions are being explored, composed of a rather small number of powerful, independent processors, as in the case of the VaMoRs-P vehicle of the Universität der Bundeswehr, on which the Transputer processing system has now been partly replaced by a cluster of three PCs (dual Pentium II) connected via a fast Ethernet-based network [20].

Current trends, however, are moving towards a mixed architecture, in which a powerful general-purpose processor is aided by specific hardware such as boards and chips implementing optical flow computation, pattern matching, convolution, and morphological filters. Moreover, some SIMD capabilities are now being transferred into the instruction sets of last-generation CPUs, which have been tailored to exploit the parallelism intrinsic to the processing of visual and audio (multimedia) data. The MMX extensions of the Intel Pentium processor, for instance, are exploited by the GOLD system, which acts as the automatic driver of the ARGO vehicle, to boost performance.

In conclusion, it is important to emphasize that, although the new generation systems are all based on commercial hardware, the development of custom hardware has not lost significance, but is gaining a renewed interest for the production of embedded systems. Once a hardware and software prototype has been built and extensively tested, its functionalities have to be integrated in a fully optimized and engineered embedded system before marketing. It is in this stage of the project that the development of ad hoc custom hardware still plays a fundamental role and its costs are justified by a large-scale market.

3.4. Extensive tests of prototype vehicles

Several groups researching ITS applications have integrated their most promising approaches and solutions for automatic driving into prototypes of intelligent vehicles. Many of these experimental results were presented and demonstrated during important international events such as the final meeting of the PROMETHEUS project in Paris (1994), the NAHSC demonstration in San Diego, CA (August 1997), the Automated Vehicle Guidance Demo’98 in Rijnwoude, Netherlands (June 1998), and Demo’99 in East Liberty, OH (July 1999).

Generally, these prototype vehicles underwent several tests in structured environments, such as closed tracks. However, only in a few cases were extensive tests carried out on public roads in real traffic conditions:

Fig. 2. The VaMP prototype vehicle.

• the VaMP prototype was tested on the route from Munich (Germany) to Odense (Denmark) [21] in 1995,∗

• the RALPH system was tested on NavLab 5 through a journey (No Hands Across America [24]) from Pittsburgh, PA to San Diego, CA in 1995,† and

• the ARGO experimental vehicle was driven by the GOLD system for nearly 2000 km throughout Italy during the MilleMiglia in Automatico Tour [4] in 1998.‡

These tests made it possible to try out the vehicles under widely varying conditions and to bring out their robustness and weaknesses. The encouraging results obtained by all groups testify to the maturity achieved by research in this field.

3.4.1. Munich to Odense UBM test
In 1995 the VaMP autonomous vehicle developed at the Universität der Bundeswehr München (UBM) (see Fig. 2) underwent a long-distance test from Munich (Germany) to Odense (Denmark).

The tour was aimed at testing the capability, reliability, and performance of highway automatic driving. In particular, the tasks that were performed automatically were lane keeping, longitudinal control, collision avoidance, and lane change maneuvers. The desired travel speed had to be selected by a human driver, taking into account traffic signs, environmental conditions and his own goals. The safety driver was also in charge of starting the automatic lane change maneuvers and supervising the rear hemisphere, since only a front bifocal vision system was used.

∗ See http://www.unibw-muenchen.de/campus/LRT/LRT13/Fahrzeuge/VaMPE.html.
† See http://www.cs.cmu.edu/afs/cs/user/pomerlea/www/nhaa.html.
‡ See http://MilleMiglia.CE.UniPR.IT.

The Obstacle Detection and Tracking (ODT) module integrated on the VaMP vehicle is based on a recognition process which relies on specific expectations about the appearance of other traffic participants. A combination of two different approaches is used. The single object detector and tracker (SODT) searches for leading vehicles in the same lane by means of a contour following algorithm which extracts the left and right object boundaries and then verifies vertical axis symmetry and demands dark wheels and a smooth contour. The second approach, the multiple object detector and tracker (MODT), focuses on the dark area beneath the vehicle. This area is segmented with a contour analysis; then, as a validation, the system expects two lateral boundaries for vehicles in its own lane and one for those in the neighboring lanes, in order to take occlusions into account.

The Road Detection and Tracking (RDT) module estimates a state vector which describes a dynamical model of the vehicle motion and of the road shape. The state vector of the vehicle is composed of the lateral offset, yaw, side slip angle, and pitch angle. The shape of the road is characterized by both a horizontal and a vertical clothoid. The estimation is performed by a Kalman filter algorithm which recursively evaluates the measurements of the lane markings’ position in the image. Left and right lane markings are searched for by applying edge operators of small width in long horizontal search windows. A straight line is then fitted to the detected edge elements.
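
The recursive estimation can be illustrated with a deliberately reduced scalar Kalman filter on the lateral offset alone (the full RDT filter tracks the whole state vector and the clothoid parameters; the noise values here are made-up assumptions):

```python
def kalman_lateral_offset(measurements, q=0.01, r=0.5):
    """Minimal scalar Kalman filter in the spirit of the RDT module:
    recursively fuse noisy per-frame measurements of the lateral offset.
    q: process noise variance (how fast the true offset may drift),
    r: measurement noise variance."""
    x, p = measurements[0], 1.0        # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                      # predict: offset assumed near-constant
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With constant measurements and a single outlier, the estimate barely follows the outlier and then decays back toward the true value, which is exactly the smoothing behavior a recursive estimator provides over per-frame lane measurements.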

During this test the system covered more than 1600 km, 95% of which was driven automatically. More than 400 lane change maneuvers were performed automatically. Driving parameters such as the estimated horizontal curvature and estimated lane width were analyzed subsequently.

Problems occurred at construction zones, where the lane markings were ambiguous and could not be correctly interpreted by the RDT module. A major weak point of the system turned out to be the limited operating range of the video cameras, which happened to be dazzled by the glare of the sun. Other problems were due to the simple object models on which the ODT module relies. In fact, these hypotheses are met not only by vehicles but sometimes by other objects, such as certain patterns on the road or, rarely, bridges’ shadows, which may therefore generate false alarms.

3.4.2. No Hands Across America
In July 1995 researchers from the Robotics Institute of Carnegie Mellon University performed a trans-continental journey from Pittsburgh, PA, to San Diego, CA, in a Pontiac Trans Sport which drove itself most of the way. This trip was the culmination of over a decade of research funded by the US Department of Defense and the US Department of Transportation.

The vehicle, called the NavLab 5 (see Fig. 3), was outfitted with a portable computer, a windshield-mounted camera, a GPS receiver, a radar system for obstacle detection, as well as some other supplementary equipment. The determination of road curvature was based on the analysis of a picture of the scene ahead and on radar data. Once it had this information, it was able to produce steering commands which kept the vehicle in its lane (throttle and brakes were human operated).

The brain of NavLab 5 is a vision-based software system dubbed RALPH. As the vehicle moves along, a video camera mounted just below the rearview mirror reads the roadway, imaging information including lane markings, oil spots, curbs and even ruts made in snow by car wheels. Lane detection is based on the determination of features running parallel to the road, aided by the flat road assumption. Obstacle detection is mainly based on radar information. The driving system runs on the PANS (Portable Advanced Navigation Support) hardware platform. The platform provides a computing base and I/O functions for the system, as well as position estimation, steering wheel control and safety monitoring.
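
A hypothesise-and-test sketch in the spirit of RALPH’s use of road-parallel features (the shift model and scoring below are simplified guesses of ours, not the published algorithm): for each candidate curvature, the rows of a bird’s-eye image are shifted so that a road with that curvature would become straight, and the candidate whose features line up best into columns wins.

```python
import numpy as np

def best_curvature(ipm_image, candidates):
    """For each candidate curvature, laterally shift every row of a
    bird's-eye image to undo that curvature, then score how well features
    align into columns (a peaky column profile has high variance).
    Returns the best-scoring candidate."""
    h, w = ipm_image.shape
    best_c, best_score = None, -np.inf
    for c in candidates:
        straightened = np.empty_like(ipm_image)
        for row in range(h):
            # lateral displacement of a constant-curvature road at this row
            shift = int(round(c * (h - 1 - row) ** 2))
            straightened[row] = np.roll(ipm_image[row], -shift)
        profile = straightened.sum(axis=0)
        score = profile.var()  # aligned features -> concentrated profile
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```

Because the score only asks for column alignment, any feature running parallel to the road contributes, which is what made RALPH tolerant of poor lane markings.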

Thanks to this trip the researchers were able to prove the roadworthiness of on-road autonomous lane keeping and lateral roadway departure warning and support systems. The NavLab 5 was able to drive itself 98.2% of the way (2797/2849 miles). Some other statistical data about the tour are reported in Table 3.

The system proved to be robust with respect to real road and traffic conditions. The major problems encountered were due to rain, low sun reflections, shadows of overpasses, construction zones, and road and road-marking deterioration. A strong point of the system,

Fig. 3. The NavLab 5 Pontiac Transport and the No Hands Across America logo.

however, is the ability to use any feature parallel to the road (such as the road boundary or tyre wear marks) in the determination of road curvature: thanks to this characteristic RALPH showed robust Road Following ability even in the case of poor lane markings.

3.4.3. MilleMiglia in Automatico
The ARGO experimental autonomous vehicle (shown in Fig. 4), developed at the Dipartimento di Ingegneria dell’Informazione of the Università di Parma, Italy, is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information for the automatic driving of the vehicle, and with different output devices used to test the automatic features.

Only low-cost passive sensors (cameras) are used on ARGO to sense the surrounding environment: by means of stereo vision, obstacles on the road are

Table 3
Statistical data of the No Hands Across America tour

Total autonomous statistics
  Autonomous driving percentage: 98.2% (4503/4587 km)
  Longest autonomous segment: 111 km
  Average speed: 102.72 km/h

detected and localized, while the processing of a single monocular image allows the road geometry in front of the vehicle to be extracted. Furthermore, an I/O board is used to acquire information about the velocity of the vehicle and other user data.

Fig. 4. The ARGO experimental vehicle and the MilleMiglia in Automatico logo.

The approach followed for lane detection is based on the extraction of features from a monocular gray-tone image in which the perspective effect has been removed. The features are selected and matched against either a straight road model (during detection) or the previous result (during tracking). Obstacle detection is based on the verification of hypotheses about the road shape, such as planarity.

The ARGO vehicle has autonomous steering capabilities: the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel in order to follow the road; moreover, human-triggered lane change maneuvers can be performed automatically. For debug purposes, the result of the processing is also fed back to the driver through a set of output devices installed on board the vehicle: an acoustical device warns the driver in case dangerous conditions are detected, while visual feedback is supplied by displaying the results both on an on-board monitor and on an LED-based control panel.

In order to extensively test the vehicle under different traffic situations, road environments, and weather conditions, a 2000 km journey was carried out in

Fig. 5. Statistical data regarding system performance during the MilleMiglia in Automatico tour.

June 1998. During this test, ARGO drove itself for about 94% of the total trip along the Italian highway network, passing through flat areas and hilly regions including viaducts and tunnels. The Italian road network is particularly suited for such an extensive test since it is characterized by quickly varying road scenarios, which include changing weather conditions and a generally high amount of traffic.

The analysis of the data collected during the tour allowed the computation of a number of statistics regarding system performance (see Fig. 5). In particular, for each stage of the tour the average and the maximum speed of the vehicle during automatic driving were computed. The average speed was strongly influenced by the heavy traffic conditions (especially on the Torino, Milano and Roma by-passes) and by the presence of toll stations, junctions, and road works. The automatic driving percentage and the maximum distance automatically driven show high values despite the presence of many tunnels (particularly on the Appennine routes Ancona–Roma and Firenze–Bologna) and of several stretches of road with absent or worn lane markings (Ferrara–Ancona and Ancona–Roma) or even no lane markings at all (Firenze–Bologna). It is fundamentally important to also note that some stages included passing through toll stations and transiting on by-passes with heavy traffic and frequent queues, during which the system had to be switched off.

Apart from some difficulties due to the sometimes poor road network infrastructure (absent or worn lane markings), the major problems encountered were the sunlight’s reflection on the windscreen, causing the acquisition of oversaturated images, and the cameras’ inability to adapt to quick changes in illumination (such as at the entrance to or exit from a tunnel), provoking a degradation in image quality.

4. Conclusions and perspectives on intelligent vehicles

Some common considerations can be drawn from the three experiments described: all of them succeeded in reaching an extremely high percentage of automatic driving; moreover, although in the first experiments some special-purpose hardware was needed, the more recent experiments benefited from the latest technological advances and used only commercial hardware.

However, though computing power no longer seems to be a problem for this kind of application, some problems remain regarding data acquisition. It has been shown that the main difficulties encountered during the demos were due to light reflections and imperfect conditions for image acquisition (wet road, direct sunshine on the cameras, tunnels and bridges’ shadows). As a common framework for the next years of research, a great deal of work will be devoted to enhancing sensors’ capabilities and performance, including the improvement of gain control and sensitivity in extreme illumination conditions.

The promising results obtained in the first stages of the research on intelligent vehicles demonstrate that a full automation of traffic (at least on motorways or sufficiently structured roads) is technically feasible. Nevertheless, besides the technical problems, some issues must be carefully considered in the design of these systems, such as the legal aspects related to responsibility in case of faults or incorrect behavior of the system, and the impact of automatic driving on human passengers.

Therefore, a long period of exhaustive tests and refinement must precede the availability of these systems on the general market, and a fully automated highway system with intelligent vehicles driving and exchanging information is not expected for a couple of decades.

For the time being, complete automation will be restricted to special infrastructures, such as industrial applications or public transportation. Then, automatic vehicular technology will be gradually extended to other key transportation areas such as the shipping of goods, e.g., on expensive trucks, where the cost of an autopilot is negligible with respect to the cost of the vehicle itself and the service it provides. Finally, once the technology has stabilized and the most promising solutions and best algorithms have been frozen, a massive integration and widespread use of such systems will also take place in private vehicles, but this will not happen for another two or more decades.

References

[1] M. Bertozzi, A. Broggi, GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection, IEEE Transactions on Image Processing 7 (1) (1998) 62–81.

[2] D. Bishop, Vehicle-highway automation activities in the United States, in: Proceedings of the International AHS Workshop, US Department of Transportation, 1997.

[3] A. Broggi, S. Bertè, Vision-based road detection in automotive systems: A real-time expectation-driven approach, Journal of Artificial Intelligence Research 3 (1995) 325–348.

[4] A. Broggi, M. Bertozzi, A. Fascioli, The 2000 km test of the ARGO vision-based autonomous vehicle, IEEE Intelligent Systems (1999) 55–64.

[5] A. Broggi, M. Bertozzi, A. Fascioli, G. Conte, Automatic Vehicle Guidance: The Experience of the ARGO Vehicle, World Scientific, Singapore, 1999.

[6] A. Broggi, G. Conte, F. Gregoretti, C. Sansoè, L.M. Reyneri, The evolution of the PAPRICA system, Integrated Computer-Aided Engineering Journal (special issue on Massively Parallel Computing) 4 (2) (1997) 114–136.

[7] P. Charbonnier, P. Nicolle, Y. Guillard, J. Charrier, Road boundaries detection using color saturation, in: Proceedings of the Ninth European Signal Processing Conference ’98, September 1998.

[8] A. Coda, P.C. Antonello, B. Peters, Technical and human factor aspects of automatic vehicle control in emergency situations, in: Proceedings of the IEEE International Conference on Intelligent Transportation Systems ’97, 1997.

[9] J.D. Crisman, C.E. Thorpe, UNSCARF, a color vision system for the detection of unstructured roads, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991, pp. 2496–2501.

[10] S. Denasi, C. Lanzone, P. Martinese, G. Pettiti, G. Quaglia, L. Viglione, Real-time system for road following and obstacle detection, in: Proceedings of the SPIE on Machine Vision Applications, Architectures, and Systems Integration III, October 1994, pp. 70–79.

[11] E.D. Dickmanns, Expectation-based multi-focal vision for vehicle guidance, in: Proceedings of the Eighth European Signal Processing Conference ’95, Trieste, Italy, September 1995, pp. 1023–1026.

[12] U. Franke, D. Gavrila, S. Görzig, F. Lindner, F. Paetzold, C. Wöhler, Autonomous driving goes downtown, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’98, Stuttgart, Germany, October 1998, pp. 40–48.

[13] J. Goldbeck, D. Graeder, B. Huertgen, S. Ernst, F. Wilms, Lane following combining vision and DGPS, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’98, Stuttgart, Germany, October 1998, pp. 445–450.

[14] T.M. Jochem, S. Baluja, Massively parallel, adaptive, color image processing for autonomous road following, in: H. Kitano (Ed.), Massively Parallel Artificial Intelligence, AAAI Press/MIT Press, Cambridge, MA, 1993.

[15] T.M. Jochem, S. Baluja, A massively parallel road follower, in: M.A. Bayoumi, L.S. Davis, K.P. Valavanis (Eds.), Proceedings of the IEEE Computer Architectures for Machine Perception, New Orleans, LA, December 1998, pp. 2–12.

[16] T.M. Jochem, D.A. Pomerleau, C.E. Thorpe, MANIAC: A next generation neurally based autonomous road follower, in: Proceedings of the Third International Conference on Intelligent Autonomous Systems, Pittsburgh, PA, February 1993.

[17] K.I. Kim, S.Y. Oh, S.W. Kim, H. Jeong, C.N. Lee, B.S. Kim, C.S. Kim, An autonomous land vehicle PRV II: Progresses and performance enhancement, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’95, Detroit, MI, September 1995, pp. 264–269.

[18] D. Koller, J. Malik, Q.-T. Luong, J. Weber, An integrated stereo-based approach to automatic vehicle guidance, in: Proceedings of the Fifth International Conference on Computer Vision, Boston, MA, 1995, pp. 12–20.

[19] W. Kruger, W. Enkelmann, S. Rossle, Real-time estimation and tracking of optical flow vectors for obstacle detection, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’95, Detroit, MI, September 1995, pp. 304–309.

[20] M. Lützeler, E.D. Dickmanns, Road recognition with MarVEye, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’98, Stuttgart, Germany, October 1998, pp. 341–346.

[21] M. Maurer, R. Behringer, F. Thomanek, E.D. Dickmanns, A compact vision system for road vehicle guidance, in: Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 1996.

[22] S.L. Michael Beuvais, C. Kreucher, Building world model for mobile platforms using heterogeneous sensors fusion and temporal analysis, in: Proceedings of the IEEE International Conference on Intelligent Transportation Systems ’97, Boston, MA, November 1997, p. 101.

[23] M. Mizuno, K. Yamada, T. Nakano, S. Yamamoto, Robustness of lane mark detection with wide dynamic range vision sensor, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’96, Tokyo, 1996, pp. 171–176.

[24] D.A. Pomerleau, T.M. Jochem, Rapidly adapting machine vision for automated vehicle steering, IEEE Expert 11 (2) (1996).

[25] K.A. Redmill, A simple vision system for lane keeping, in: Proceedings of the IEEE International Conference on Intelligent Transportation Systems ’97, Boston, MA, November 1997, p. 93.

[26] R. Risack, P. Klausmann, W. Krüger, W. Enkelmann, Robust lane recognition embedded in a real-time driver assistance system, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’98, Stuttgart, Germany, October 1998, pp. 35–40.

[27] U. Seger, H.-G. Graf, M.E. Landgraf, Vision assistance in scenes with extreme contrast, IEEE Micro (1993) 50–56.

[28] S.M. Smith, J.M. Brady, ASSET-2: Real-time motion segmentation and shape tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence 17 (8) (1995) 814–829.

[29] C.G. Sodini, S.J. Decker, A 256 × 256 CMOS brightness adaptive imaging array with column-parallel digital output, in: Proceedings of the IEEE Intelligent Vehicles Symposium ’98, Stuttgart, Germany, October 1998, pp. 347–352.

[30] C.J. Taylor, J. Kosecká, R. Blasi, J. Malik, A comparative study of vision-based lateral control strategies for autonomous highway driving, International Journal of Robotics Research 18 (5) (1999) 442–453.

[31] H. Tokuyama, Asia–Pacific projects status and plans, in: Proceedings of the International AHS Workshop, US Department of Transportation, 1997.

Massimo Bertozzi received the Dr. Eng. (Master) degree in Electronic Engineering (1994) from the Università di Parma, Italy, discussing a Master’s Thesis about the implementation of simulation of Petri nets on the CM-2 massively parallel architecture. From 1994 to 1997 he was a Ph.D. student in Information Technology at the Dipartimento di Ingegneria dell’Informazione, Università di Parma, where he chaired the local IEEE student branch. During this period his research interests focused mainly on the application of image processing to real-time systems and to vehicle guidance, the optimization of machine code at assembly level, and parallel and distributed computing. After obtaining the Ph.D. degree in 1997, he has held a permanent position at the same department. He is a member of IEEE and IAPR.

Alberto Broggi received the Dr. Eng. (Master) degree in Electronic Engineering (1990) and the Ph.D. degree in Information Technology (1994) from the Università di Parma, Italy. From 1994 to 1998 he was a Full Researcher at the Dipartimento di Ingegneria dell’Informazione, Università di Parma, Italy. Since 1998 he has been Associate Professor of Artificial Intelligence at the Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy. His research interests include real-time computer vision approaches for the navigation of unmanned vehicles, and the development of low-cost computer systems to be used on autonomous agents. He is the coordinator of the ARGO project, with the aim of designing, developing and testing the ARGO autonomous prototype vehicle, equipped with special active safety features and enhanced driving capabilities. Professor Broggi is the author of more than 100 refereed publications in international journals, book chapters, and conference proceedings. Actively involved in the organization of scientific events, Professor Broggi is on the Editorial Board and Program Committee of many international journals and conferences and has been invited to act as Guest Editor of journal and magazine theme issues on topics related to Intelligent Vehicles, Computer Vision Applications, and Computer Architectures for Real-Time Image Processing. Professor Broggi is the Newsletter Editor and a member of the Conference and Publication Committees of the IEEE Intelligent Transportation Systems Council and will be the Program Chair of the next IEEE Intelligent Vehicles Symposium, Detroit, MI, 2000.

Alessandra Fascioli received the Dr. Eng. (Master) degree in Electronic Engineering (1996) from the Università di Parma, Italy, discussing a Master’s Thesis on stereo vision-based obstacle localization in automotive environments. In January 2000 she received the Ph.D. degree in Information Technology at the Dipartimento di Ingegneria dell’Informazione, Università di Parma. Her research interests focus on real-time computer vision for automatic vehicle guidance and image processing techniques based on the Mathematical Morphology computational model. She is a member of IEEE and IAPR.