

Integrating Motion and Appearance for Overtaking Vehicle Detection

Alfredo Ramirez, Eshed Ohn-Bar and Mohan Trivedi, Fellow, IEEE

Abstract— The dynamic appearance of vehicles as they enter and exit a scene makes vehicle detection a difficult and complicated problem. Appearance-based detectors generally provide good results when vehicles are in clear view, but have trouble at the scene's edges due to changes in the vehicle's aspect ratio and partial occlusions. To compensate for some of these deficiencies, we propose incorporating motion cues from the scene. In this work, we focus on overtaking vehicle detection in a freeway setting with front- and rear-facing monocular cameras. Motion cues are extracted from the scene, and, leveraging the epipolar geometry of the monocular setup, motion compensation is performed. Spectral clustering is used to group similar motion vectors together, and after post-processing, vehicle detection candidates are produced. Finally, these candidates are combined with an appearance detector to remove false positives, outputting detections as a vehicle travels through the scene.

I. INTRODUCTION

In the last few years, multi-modal sensors for driver assistance systems have become the norm for higher-end commercial vehicles. From side and rear radar for obstacle detection, to bird's-eye-view and back-up cameras for parking assistance, modern vehicles have become more aware of their surroundings. To this end, this work focuses on a monocular computer vision system for detecting vehicles that overtake the ego-vehicle in freeway settings. Overtaking vehicles pose an inherent danger to the driver because of the high velocity needed to overtake on a freeway, and because vehicles must pass through the driver's blind spot to perform an overtaking maneuver. We chose a monocular setup for this work because no costly or specialized equipment is required (as opposed to a radar/lidar or a stereo camera rig). It is also possible to take advantage of monocular camera systems that are already integrated into the vehicle, such as a forward-facing lane camera or a backup camera.

In this paper, we propose an algorithm for detecting overtaking vehicles using motion cues from the scene. Motion compensation of the video data is performed using the optical flow of the scene and epipolar geometry. After post-processing and outlier removal, overtaking vehicle candidates are produced. By combining these outputs with the output of an appearance-based vehicle detector, we are able to detect vehicles earlier in the scene and increase the accuracy of the detections compared to using an appearance detector alone. To evaluate our algorithm, extensive video from front and rear cameras was collected from freeway driving scenarios under normal traffic and weather conditions. The performance of our algorithm and of state-of-the-art appearance-based detectors was compared on this data.

The authors are with the Department of Electrical and Computer Engineering, University of California San Diego (UCSD), La Jolla, CA, 92092 USA (e-mail: [email protected]; [email protected]; [email protected]).

II. RELATED RESEARCH STUDIES

Vehicle detection and tracking from a stationary camera has been a long-studied problem in computer vision [1], [2]. However, on-road vehicle and overtaking detection can be a much more difficult computer vision problem due to the dynamic movement of the camera and the high velocity of the surrounding vehicles, and it can be approached in a variety of ways [3], [4].

Appearance-based vehicle detectors are very popular due to their good performance after thorough training. These algorithms range from the deformable parts model (DPM) detector [5], which is a HOG-feature and latent-SVM based detector, to the algorithm shown in [6], which uses an extension of DPM to analyze 2D landmarks and generate a 3D analysis of a vehicle. These detectors can perform well when a vehicle in the scene moves relatively slowly, so that the vehicle stays mostly visible within the scene and remains within the same perspective. However, appearance detectors have trouble when vehicles enter and leave the scene frequently, causing partial occlusions, or when they travel through the scene so that a different view of the vehicle is seen by the camera (e.g. front of vehicle, side of vehicle, etc.).

An alternative approach to vehicle and overtaking detection is a motion-based algorithm that takes advantage of the movement within the scene. For example, in [7], optical flow is used to model and separate the background into regions; the background regions are then removed from the scene to detect the overtaking vehicle. More recently, Garcia et al. integrated information from a radar system with optical flow information in the overtaking region of the scene to perform the detections [8]. Although motion can be a useful tool for overtaking vehicle detection, it can be difficult to calculate in areas with little texture (such as on roads), and it is limited by the accuracy of the optical flow algorithm used.

Certain groups have previously found success in fusing appearance-based detectors with motion cues. In [9], optical flow at the edges of the image frame is combined with an appearance-based, active-learning vehicle detector in order to detect and track vehicles.

III. PROPOSED METHOD

A flow diagram of the proposed motion algorithm can be seen in Figure 1. We start by calculating the optical flow of the scene. Three optical flow algorithms were considered for this work: Lucas-Kanade [10], Farnebäck [11], and Brox & Malik [12]. We found that the Farnebäck algorithm provided fairly accurate dense optical flow for our algorithm at a reasonable frame rate. However, the optical flow algorithm still produced many erroneously tracked points (as seen in Figure 2).

Fig. 1: The proposed method for motion-based detection of overtaking vehicles.

Fig. 2: The result of the forward-backward consistency check for optical flow vectors. Notice that most of the incorrect flow vectors have been removed, especially on the road and near the edges of the image, where there are occlusions in the scene. (a) The sampled forward flow vectors. (b) The backward flow vectors. (c) The corrected flow vectors.

A. Pre-Processing

To eliminate incorrectly tracked points, a forward-backward consistency check is performed. Backward and forward dense optical flow is calculated between consecutive video frames and sampled at strong corners within the images to produce flow vectors. Strong corners are easier to track and are more likely to produce accurate flow vectors, since their appearance remains relatively uniform as vehicles move through the scene. We then compare the backward vector originating from the endpoint of each forward vector with that forward vector, and ensure the two are consistent (i.e. they sum to zero). Inconsistent flow vectors can occur in areas where the optical flow is incorrect, such as at the edges of the frame or in areas with occlusions. Removing them from consideration improves the overall accuracy of the optical flow. The consistency check used in this algorithm is implemented as:

||V_f − V_b||_2 ≤ t_c    (1)

where V_f and V_b are the forward and backward flow vectors, respectively, and t_c is the consistency threshold. An example of this procedure can be seen in Figure 2.
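To make the pre-processing step concrete, the sketch below shows one way it could be implemented with OpenCV's Farnebäck flow and Shi-Tomasi corners. This is an illustrative sketch rather than the implementation used in this work; the Farnebäck parameters, the corner settings, the threshold value t_c, and the convention that the backward vector is reversed before applying Eq. (1) are all assumptions.

```python
import cv2
import numpy as np

def consistent_flow_vectors(prev_gray, curr_gray, t_c=1.0, max_corners=2000):
    """Sample dense Farneback flow at strong corners and keep only the
    vectors that pass the forward-backward consistency check of Eq. (1)."""
    # Dense forward and backward optical flow between consecutive frames.
    fwd = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)

    # Strong (Shi-Tomasi) corners are easier to track reliably.
    corners = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 5)
    h, w = prev_gray.shape

    kept = []
    for x, y in corners.reshape(-1, 2):
        xi, yi = int(round(x)), int(round(y))
        v_f = fwd[yi, xi]                                 # forward flow at the corner
        x2 = int(np.clip(round(x + v_f[0]), 0, w - 1))
        y2 = int(np.clip(round(y + v_f[1]), 0, h - 1))
        v_b = -bwd[y2, x2]                                # reversed backward flow at the endpoint
        if np.linalg.norm(v_f - v_b) <= t_c:              # Eq. (1): consistent pair
            kept.append(((x, y), (x + v_f[0], y + v_f[1])))
    return kept                                           # list of (start, end) point pairs
```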

B. Vehicle Candidate Detection

Next, the corrected flow vectors are used with the moving camera setup to take advantage of the stereo (epipolar) geometry of the scene. The flow vectors give corresponding point pairs for consecutive frames, and with the knowledge that the camera is moving (since the vehicle is moving), the fundamental matrix can be calculated for the frames. The fundamental matrix provides geometric constraints on a pair of stereo images, which can provide information about the scene, such as how the camera is moving. With the point pairs, the fundamental matrix is estimated using the 8-point algorithm [13] and random sample consensus (RANSAC) to remove any leftover incorrect matches. Finally, the epipole (i.e. the focus of expansion) location in the first image is extracted from the fundamental matrix.

Fig. 3: The results of the spectral clustering on the sampled optical flow vectors. Each cluster is displayed with a distinct color. The larger, filled circles show the centroid of the cluster, and the black line indicates the average flow vector magnitude and direction of the cluster.
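A sketch of this step, assuming OpenCV is available: the point pairs from the corrected flow vectors are fed to a RANSAC-based fundamental matrix estimator, and the epipole in the first image is recovered as the right null vector of F. The RANSAC parameters are illustrative choices.

```python
import cv2
import numpy as np

def epipole_from_flow(point_pairs):
    """Estimate the fundamental matrix from the (start, end) point pairs of
    the corrected flow vectors and return the epipole in the first image."""
    pts1 = np.float32([p for p, _ in point_pairs])
    pts2 = np.float32([q for _, q in point_pairs])

    # Robust RANSAC estimation of F, in the spirit of the 8-point + RANSAC
    # scheme described above; the mask marks the surviving inlier matches.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0,
                                            confidence=0.99)

    # The epipole e in the first image satisfies F e = 0, so it is the
    # right null vector of F (last row of V^T from the SVD).
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    e = e / e[2]          # back to inhomogeneous pixel coordinates
    return e[:2], F, inlier_mask
```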

The corrected flow vectors are grouped together by similarity in flow and location using the spectral clustering algorithm recommended in [14]. The output of the clustering algorithm can be seen in Figure 3. We found that the spectral clustering algorithm outperformed simple k-means clustering, outlining objects in the scene more closely.
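As an illustration (not the exact formulation of [14]), the flow vectors could be clustered on a joint position/motion feature with an off-the-shelf spectral clustering implementation. The feature weights, the RBF affinity, and the fixed cluster count below are assumptions made for the example; the cluster dictionary format is reused by the later sketches.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_flow_vectors(starts, flows, n_clusters=10,
                         pos_weight=1.0, flow_weight=5.0):
    """Group flow vectors by similarity in image location and in motion."""
    starts = np.asarray(starts, dtype=np.float64)   # (N, 2) vector start points
    flows = np.asarray(flows, dtype=np.float64)     # (N, 2) flow components (dx, dy)

    # Joint position/motion feature; the RBF affinity plays the role of the
    # similarity graph used by spectral clustering.
    features = np.hstack([pos_weight * starts, flow_weight * flows])
    labels = SpectralClustering(n_clusters=n_clusters, affinity="rbf",
                                assign_labels="kmeans",
                                random_state=0).fit_predict(features)

    clusters = []
    for k in range(n_clusters):
        members = labels == k
        if not members.any():
            continue
        clusters.append({"points": starts[members],
                         "flows": flows[members],
                         "centroid": starts[members].mean(axis=0),
                         "mean_flow": flows[members].mean(axis=0)})
    return clusters
```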

With the motion vectors clustered, we must now isolate the clusters belonging to overtaking vehicles. For a forward-moving vehicle, objects in the scene which are stationary or moving slower than the ego-vehicle will have motion vectors moving away from the epipole. Objects moving parallel to the ego-vehicle but at a higher velocity (i.e. overtaking the ego-vehicle) will have motion vectors moving towards the epipole. To check if a vector is moving towards the epipole, we use the following equation:

v · l_v ≥ t_motion    (2)

where v is the motion vector in question, l_v is a unit vector originating at v and pointing towards the epipole, and t_motion is a threshold. The dot product is large when both vectors point in similar directions, and small if they point in opposite directions; t_motion controls how much the angle between the vectors can vary.
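For illustration, the check of Eq. (2) might look as follows; the threshold value is an assumed placeholder.

```python
import numpy as np

def moves_toward_epipole(start, flow, epipole, t_motion=0.5):
    """Return True when a flow vector points toward the epipole (Eq. (2))."""
    v = np.asarray(flow, dtype=np.float64)
    to_epipole = np.asarray(epipole, dtype=np.float64) - np.asarray(start, dtype=np.float64)
    if np.linalg.norm(to_epipole) == 0 or np.linalg.norm(v) == 0:
        return False

    # Unit vector anchored at the flow vector's origin, pointing at the epipole.
    l_v = to_epipole / np.linalg.norm(to_epipole)

    # The dot product is large and positive when the flow heads toward the
    # epipole, and negative when it heads away from it.
    return float(v.dot(l_v)) >= t_motion
```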

C. Post-Processing

Fig. 4: Sample output detections of the motion-only algorithm. Note that the vehicles are being detected as they enter the scene, even though they are still partially occluded.

The spectral clustering algorithm has a tendency to over-cluster the motion vectors, thereby oversegmenting objects in the scene. To reduce the number of clusters, we join clusters that have similar average motion and are in close proximity. Two clusters are merged if the following inequality is satisfied:

||c_i − c_j||_2 + ||v_i − v_j||_2 ≤ t_similar    (3)

where c_i and c_j are the centroid locations of the ith and jth clusters, and v_i and v_j are the average flow vectors of the clusters. Here, we assume that clusters belonging to the same object in the scene will be near each other and have similar motion.
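A sketch of the merging rule of Eq. (3), assuming the cluster dictionaries produced by the clustering sketch above; the threshold value and the greedy pairwise strategy are illustrative choices rather than the exact procedure used here.

```python
import numpy as np

def merge_similar_clusters(clusters, t_similar=60.0):
    """Greedily merge clusters whose centroids and mean flows are close (Eq. (3))."""
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ci, cj = clusters[i], clusters[j]
                d = (np.linalg.norm(ci["centroid"] - cj["centroid"]) +
                     np.linalg.norm(ci["mean_flow"] - cj["mean_flow"]))
                if d <= t_similar:
                    # Combine members and recompute the cluster statistics.
                    points = np.vstack([ci["points"], cj["points"]])
                    flows = np.vstack([ci["flows"], cj["flows"]])
                    clusters[i] = {"points": points, "flows": flows,
                                   "centroid": points.mean(axis=0),
                                   "mean_flow": flows.mean(axis=0)}
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters
```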

The final step in the motion algorithm is to clean the merged clusters in order to produce the detection candidates. The final clusters sometimes include outlier motion vectors that are a large distance away from the rest of the motion vectors in the cluster. In order to remove these outlier vectors, we calculate the sample standard deviation of the x and y coordinates for each cluster. Motion vectors that are more than two standard deviations away from the centroid in the x or y direction are removed from the cluster. Next, bounding boxes for all of the clusters are calculated. To remove false positives from the final output, clusters with few motion vectors or with too small a bounding box are removed from the final detections. This is done because overtaking vehicles that are near the ego-vehicle will appear large in the image. Therefore, small clusters or clusters with small bounding boxes are most likely false positive detections, or vehicles that are far away in the scene, which do not need to be detected. Finally, the remaining vehicle candidates are output as bounding boxes, which can be seen in Figure 4.

Fig. 5: The LISA-X testbed is equipped with multiple cameras, both inside and out, as well as radar systems and other sensors.
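The post-processing described above could be sketched as follows, again on the cluster format from the earlier sketches; the minimum vector count and minimum box area are assumed values chosen only for illustration.

```python
import numpy as np

def candidate_boxes(clusters, min_vectors=8, min_area=1500.0):
    """Trim outlier vectors that lie more than two sample standard deviations
    from the cluster centroid in x or y, then keep only clusters that are
    large enough to correspond to a nearby overtaking vehicle."""
    boxes = []
    for c in clusters:
        pts = np.asarray(c["points"], dtype=np.float64)
        if len(pts) < 3:
            continue                      # too few vectors to form a box

        mean = pts.mean(axis=0)
        std = pts.std(axis=0, ddof=1)     # sample standard deviation per axis
        keep = np.all(np.abs(pts - mean) <= 2.0 * std, axis=1)
        pts = pts[keep]

        if len(pts) < min_vectors:
            continue                      # sparse clusters are likely noise
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        if (x1 - x0) * (y1 - y0) < min_area:
            continue                      # small boxes: far-away vehicles or false alarms
        boxes.append((float(x0), float(y0), float(x1), float(y1)))
    return boxes
```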

D. Integration of Motion Algorithm and Appearance Detector

Integration occurs in two ways. First, the sets of vehicle boxes from the two algorithms are joined by taking their union and removing overlapping instances using non-maximal suppression. Second, boxes without sufficient flow after ego-motion compensation are removed. We found that these two techniques significantly improve the performance of object detectors. Additionally, early detections from the motion algorithm are added to the detection sequence.
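One way the two integration steps might be combined in code is sketched below. The thresholds are illustrative, and flow_support, standing in for the per-box count of residual flow vectors after ego-motion compensation, is a hypothetical input rather than a quantity defined in this paper.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def integrate_detections(appearance_boxes, motion_boxes, flow_support,
                         nms_thresh=0.5, min_support=5):
    """Union of appearance (e.g. DPM) boxes and motion boxes followed by a
    greedy non-maximum suppression. Appearance boxes with too few
    motion-compensated flow vectors inside them are dropped first."""
    kept_appearance = [b for i, b in enumerate(appearance_boxes)
                       if flow_support.get(i, 0) >= min_support]
    candidates = kept_appearance + list(motion_boxes)

    # Greedy NMS: boxes listed earlier (appearance first) take priority.
    final = []
    for box in candidates:
        if all(iou(box, kept) < nms_thresh for kept in final):
            final.append(box)
    return final
```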

IV. EXPERIMENTAL SETUP

The dataset used for the experiments was collected using the LISA-X vehicle testbed, which can be seen in Figure 5. Two Point Grey cameras with Theia wide-angle lenses are mounted on the outside of the testbed: one mounted on the windshield pointing forward, and one attached to the rear roof of the vehicle pointing backward. These cameras collect 896x672 pixel monochrome video at 20 fps. The dataset consists of video of naturalistic driving in freeway settings with medium traffic. Ground-truth overtaking instances were annotated with bounding boxes over 1327 frames containing 20 overtaking events. The ground-truth data is used to evaluate the tested algorithms by comparing the amount of overlap between bounding boxes. For the appearance baseline, the standard Haar-like + Viola-Jones method [15] trained on front and rear vehicles proved inadequate, as vehicles passing in close proximity at the sides of the ego-vehicle were often missed when their aspect ratio changed. We therefore turned to a recent object detector that accommodates changes in aspect ratio and object view: the deformable parts model of [5].

To test a more extreme case, we also evaluated the motion-only algorithm using a stitched wide-angle view from [16]. The stitched view contains artifacts from the stitching process, such as parallax errors in the overlapping regions and stretching around the edges of the image, that would cause problems for vehicle detectors. Sample frames of this data can be seen in Figure 6. We use this view to test non-ideal video inputs, and to simulate video effects from surround-view equipment, including the Point Grey Ladybug camera system, or omnidirectional cameras, such as in [17]. The dataset consists of 22 minutes of video and was gathered under similar conditions as the previous data, but limited to the rear view of the ego-vehicle.

V. RESULTS

In this section we evaluate the approach for motion integration with the state-of-the-art vehicle detector from [5], referred to as DPM. The DPM algorithm produces detections only when the vehicle is entirely visible and has entered the scene. At these moments, the vehicle bounding box can be quite large, and although the motion-based algorithm is efficient at detecting the presence of a vehicle, it sometimes produces non-tight bounding boxes. Nonetheless, with a 0.2 overlap requirement for a true positive, Figure 7 shows a significant improvement of the method through motion integration. We evaluate two integration techniques: motion compensation (referred to as MC), which allows for removal of false positives, and the use of motion cues for vehicle box generation (referred to as MB) in addition to the boxes generated by the baseline appearance detector.

As previously mentioned, although the motion cues are strong in the proximity of the ego-vehicle and are reliable in identifying the presence of a vehicle, the bounding boxes generated are not always tight. We therefore evaluate the effect of changing the overlap (intersection over union) threshold needed to produce a true detection in Figure 8. The greatest improvement comes when relaxing the overlap requirement. For instance, the immensely popular DPM scheme can be greatly improved by introducing motion cues. The strength of our approach is enhanced by the fact that no training is needed, so it can be generalized to improve any existing appearance-based object detector. Off-the-shelf classification algorithms can be enhanced to produce a more continuous object detection trajectory. Nonetheless, even with the outlier removal and post-processing, further work needs to be done in order to guarantee tightness of the motion boxes. Even so, the method shows much promise, and the DPM+MC+MB solution is competitive with other techniques at less tight overlap requirements. The motion algorithm as a whole takes approximately 1 second per frame using a GPU implementation of the optical flow algorithm. This is faster than the baseline method used for evaluation in this paper, yet further speedups and optimization are needed.
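For reference, the overlap-based evaluation could be sketched as follows, reusing the iou() helper from the integration sketch above; the greedy one-to-one matching is an assumption about the evaluation protocol rather than a description of the exact procedure used here.

```python
def evaluate(detections, ground_truth, overlap_thresh=0.2):
    """Frame-level matching of detections to ground-truth boxes by overlap.
    Both inputs map a frame index to a list of (x0, y0, x1, y1) boxes.
    Returns the true positive rate (tpr) and false detection rate (fdr)."""
    tp = fp = 0
    n_gt = sum(len(boxes) for boxes in ground_truth.values())
    for frame, det_boxes in detections.items():
        unmatched_gt = list(ground_truth.get(frame, []))
        for det in det_boxes:
            overlaps = [iou(det, gt) for gt in unmatched_gt]
            best = max(overlaps, default=0.0)
            if best >= overlap_thresh:
                unmatched_gt.pop(overlaps.index(best))   # each GT box matches once
                tp += 1
            else:
                fp += 1
    tpr = tp / n_gt if n_gt else 0.0
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return tpr, fdr
```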

In evaluating the performance of the motion-only algorithm on the stitched rear view, we found that even in non-ideal settings, the algorithm still performed moderately well. The motion-only algorithm achieved a true positive rate (tpr) of 0.4618 and a false detection rate (fdr) of 0.8231. In comparison, the DPM algorithm achieved a tpr of 0.4218 and an fdr of 0.6409 on the same data. This tpr performance is expected, as the motion cues persist and remain strong while the appearance of vehicles may change significantly after the stitching process. As for the higher fdr, we found that the motion-only algorithm tended to separate vehicles into multiple detections near the edges of the scene. This is due to the stretching of vehicles that occurs near the edges, and could be corrected with a more robust cluster-joining algorithm.

Fig. 6: Samples of the stitched view data with vehicle detections produced by the motion-only algorithm. Observe the heavy distortion visible in the detected vehicles.

Fig. 7: Evaluation of the baseline vehicle detection scheme, the deformable parts model (DPM) from [5], with and without the proposed motion integration at a 0.2 overlap threshold. A significant advantage is shown by integration of the motion-based vehicle detection boxes (MB) and the false positive removal using motion compensation (MC). (Plot axes: true positives rate versus false detections rate; curves: DPM+MC+MB, DPM+MC, DPM, MotionOnly.)

Fig. 8: Varying the overlap threshold for a correct detection. Notice how the motion boxes improve upon the baseline algorithm. The motion boxes are less accurate but provide a significant improvement at less strict overlap thresholds. (Plot axes: true positives rate at FDR = 0.1 versus overlap threshold; curves: DPM+MC+MB, DPM+MC, DPM.)

VI. CONCLUDING REMARKS AND FUTURE WORK

In this paper, we studied a method for improving an appearance-based vehicle detector using motion cues in the context of overtaking vehicle detection. Reliable motion cue extraction is a difficult task, and so a scheme of processing dense optical flow was used. We compared the performance to other vehicle detectors in order to judge its relative effectiveness, and found an improvement in vehicle detection performance. In the future, tighter bounding boxes may be generated through more robust outlier removal algorithms. Additionally, incorporating long-term motion trajectories, as in [18], could improve the overall performance of the algorithm. Furthermore, it might prove useful to integrate our overtaking vehicle detector into other driver assistance systems, such as a system for merge recommendations [19].


VII. ACKNOWLEDGMENT

The authors would like to thank the associated industry partners for their support, the reviewers for their constructive comments, and the members of the Laboratory for Intelligent and Safe Automobiles (LISA) for their assistance.

REFERENCES

[1] B. Morris and M. Trivedi, "Robust classification and tracking of vehicles in traffic video streams," in ITSC, 2006.

[2] B. Morris and M. M. Trivedi, "Trajectory learning for activity understanding: Unsupervised, multilevel, and long-term adaptive approach," in PAMI, 2011.

[3] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," in PAMI, 2006.

[4] S. Sivaraman and M. M. Trivedi, "Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking and behavior analysis," in ITS, 2013.

[5] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based models," in PAMI, vol. 32, no. 9, Sep 2010.

[6] M. Hejrati and D. Ramanan, "Analyzing 3d objects in cluttered images," in NIPS, 2012, pp. 602–610.

[7] J. Wang, G. Bebis, and R. Miller, "Overtaking vehicle detection using dynamic and quasi-static background modeling," in CVPR, 2005.

[8] F. Garcia, P. Cerri, A. Broggi, A. De la Escalera, and J. Armingol, "Data fusion for overtaking vehicle detection based on radar and optical flow," in IV, 2012.

[9] E. Ohn-Bar, S. Sivaraman, and M. Trivedi, "Partially occluded vehicle recognition and tracking in 3d," in IV, June 2013.

[10] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. of Imaging Understanding Workshop, 1981.

[11] G. Farnebäck, "Two-frame motion estimation based on polynomial expansion," in The Scandinavian Conf. on Image Analysis, 2003.

[12] T. Brox and J. Malik, "Large displacement optical flow: Descriptor matching in variational motion estimation," PAMI, vol. 33, no. 2, pp. 500–513, 2011.

[13] R. Hartley, "In defense of the eight-point algorithm," PAMI, 1997.

[14] T. Sakai and A. Imiya, "Practical algorithms of spectral clustering: Toward large-scale vision-based motion analysis," in Machine Learning for Vision-Based Motion Analysis, ser. Advances in Pattern Recognition. Springer London, 2011, pp. 3–26.

[15] P. Viola and M. Jones, "Robust real-time object detection," in IJCV, 2001.

[16] A. Ramirez, E. Ohn-Bar, and M. M. Trivedi, "Panoramic stitching for driver assistance and applications to motion saliency-based risk analysis," in ITSC, Oct. 2013.

[17] T. Gandhi and M. Trivedi, "Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera," in Machine Vision and Applications, Feb. 2005.

[18] K. Fragkiadaki, G. Zhang, and J. Shi, "Video segmentation by tracing discontinuities in a trajectory embedding," in CVPR, June 2012.

[19] S. Sivaraman, M. Trivedi, M. Tippelhofer, and T. Shannon, "Merge recommendations for driver assistance: A cross-modal, cost-sensitive approach," in IV, June 2013.
