
Towards Detection of Abnormal Vehicle Behavior Using Traffic Cameras

Chen Wang, Aibek Musaev, Pezhman Sheinidashtegol, and Travis Atkison

The University of Alabama, Tuscaloosa, AL 35487

Abstract. Throughout the world, many surveillance cameras are being installed every month. For example, there are over 18,000 publicly accessible traffic cameras in 200 cities and metropolitan areas in the United States alone. Live video streams provide real-time big data about behavior happening in the present, such as traffic information. However, until now, extracting intelligence from video content has been mostly manual, i.e., through human observation. The development of smart real-time tools that can detect abnormal vehicle behaviors may alert law enforcement and transportation agencies to possible violators and can potentially help prevent traffic accidents. In this study, we address this problem by developing an application for detection of abnormal driving behavior using traffic video streams. Evaluation is performed using real videos from traffic cameras to detect stalled vehicles and possible abnormal vehicle behavior.

Keywords: Traffic behavior, object detection, object tracking, anomaly detection

1 Introduction

Each year 20-50 million people are injured or disabled in traffic accidents, and unless action is taken, road traffic injuries are predicted to become the fifth leading cause of death by 2030 [26]. Therefore, a number of intelligent transportation systems have been developed in an attempt to improve traffic control systems for road safety [33, 8, 20, 1]. One of the promising aspects of such systems is the emergence of traffic video processing systems [30] due to recent advancements in deep learning and computer vision [15, 23].

In this paper, a state-of-the-art method for real-time object detection is applied to facilitate the development of an application for determining abnormal vehicle behavior using traffic cameras. Studies show that most traffic accidents are caused by human factors, including drivers' abnormal driving behaviors [27]. Our objective is to determine vehicles whose driving behavior is abnormal compared to the nearby vehicles. Specifically, we want to detect stalled vehicles as well as vehicles that drive either significantly faster or slower than the rest of the traffic.

The proposed algorithm for detection of abnormal vehicle behavior comprises the following steps:


• Step 1: vehicle detection using live traffic video streams
• Step 2: vehicle tracking using detected vehicles
• Step 3: traffic anomaly detection using tracked vehicles

Vehicle detection is performed using a state-of-the-art method for object detection, You Only Look Once (YOLO), in each frame of the video stream. The YOLO algorithm generates a list of all detected objects, such as cars and trucks, and their bounding boxes in Step 1. This output is then used as input to the object tracking algorithm based on a Kalman filter in Step 2. The object tracking algorithm associates vehicles with unique IDs, such that each vehicle's path can be tracked through consecutive frames. Finally, the number of frames in which a vehicle appears is used as a relative measure of its speed for anomaly detection in Step 3.
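For concreteness, the three steps can be viewed as a single per-frame loop. The Python sketch below is illustrative only: run_yolo, update_tracks, and flag_anomalies are hypothetical stand-ins for the YOLO detector, the Kalman-filter tracker, and the frame-count comparison described in the following sections.

```python
from collections import defaultdict

def process_stream(frames, run_yolo, update_tracks, flag_anomalies):
    """Illustrative three-step pipeline: detect, track, flag anomalies.

    frames         -- iterable of video frames (e.g., image arrays)
    run_yolo       -- hypothetical detector: frame -> list of bounding boxes
    update_tracks  -- hypothetical tracker: (tracks, boxes) -> {vehicle_id: box}
    flag_anomalies -- hypothetical rule: {vehicle_id: frame_count} -> set of IDs
    """
    tracks = {}                      # current tracker state
    frame_counts = defaultdict(int)  # Step 3 input: frames per vehicle ID

    for frame in frames:
        boxes = run_yolo(frame)                     # Step 1: vehicle detection
        assignments = update_tracks(tracks, boxes)  # Step 2: ID association
        for vehicle_id in assignments:
            frame_counts[vehicle_id] += 1           # relative speed measure

    return flag_anomalies(dict(frame_counts))       # Step 3: anomaly detection
```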

2 Related Work

2.1 Object detection

Detecting vehicles in a video stream is an object detection problem. Object detection is a core problem in computer vision. Fast, robust object detection systems are fundamental to the success of next-generation video processing systems. Such systems are capable of searching for a specific class of objects, such as faces, people, dogs, airplanes, or vehicles [34].

The types of technologies used for object detection have largely been dictated by computing power. For instance, early systems for object detection in stationary images generally used edge detection and simple heuristics [35]. Modern systems that have access to much more storage and processing power, including GPU accelerators, are able to use large sets of training data to derive complex models of different objects efficiently [24].

Recently, deep convolutional neural networks have made significant advancements in image classification [15] and object detection [9]. Note that object detection is a far more challenging task compared to image classification as it requires more complex approaches to solve. Complexity arises due to the need for the accurate localization of objects. Modern systems approach object localization in either a multi-stage pipeline (R-CNN [10], Fast R-CNN [9], Faster R-CNN [24]) or in a single-shot manner (YOLO [23], SSD [19]). The multi-stage pipeline approaches all share one feature in common: one part of their pipeline is dedicated to generating region proposals followed by a high-quality classifier to classify those proposals. These methods are very accurate, but come at a high computational cost (low frame rate); in other words, they are not well suited for real-time vehicle detection.

An alternative way of doing object detection is by combining these two tasks into one network. This is possible because instead of having a network produce region proposals, an image is split into a grid of fixed boxes to look for objects. A single convolutional neural network simultaneously predicts multiple bounding boxes and class probabilities for those boxes. You Only Look Once (YOLO) is a state-of-the-art algorithm among single-shot detector approaches for object detection [23]. YOLO trains on full images and is the fastest approach while maintaining high accuracy. This makes it a good fit for real-time vehicle detection.

A growing body of literature applies YOLO and other related deep learning methods to detecting moving objects using a stationary camera: in [33], YOLO is applied in combination with convolutional neural networks to detect vehicle lane changes; in [7], the Histogram of Oriented Gradients (HOG) method is used for pedestrian detection; in [32], a multi-directional CNN method is used for vehicle license plate detection; in [11], a comparison is performed among different deep learning methods for effective detection purposes based on their reliability and repeatability. However, very few papers apply the above mentioned methods to detection of abnormal behavior of vehicles on roads.

In the proposed project, YOLO is used in combination with the Kalman filter algorithm to detect abnormal vehicles, which can provide new insight for transportation surveillance techniques. In fact, related studies mostly focus on the vehicles themselves instead of their trajectories [2, 25, 11], including vehicle shifting, drifting, and platooning, rather than detecting speeding cases using a stationary surveillance camera. In the remainder of this paper, a new perspective is presented: the camera field of view (FOV) is separated into two sections - one for each traffic direction - and each section is analyzed separately. The model is implemented in MATLAB using the YOLO algorithm for object detection and a Kalman filter for object tracking, resulting in a log file with tracked vehicles.

2.2 Object tracking

Object tracking is one of the fundamental challenges in computer vision [34]. The goal of object tracking is to accurately locate objects of interest based on a series of consecutive video frames. The increasing need for automated video analysis has produced a lot of research on object tracking algorithms for various application areas, including surveillance [3], vehicle tracking [17], robotics [16], and medical imaging [29].

The task of estimating the motion path of an object in successive video frames consists of two steps: object detection and object tracking. As shown in Section 2.1, the state-of-the-art approaches in object detection generate bounding boxes for detected objects in addition to the probability of detecting a given class of objects, such as cars, trucks, or persons.

The vehicle tracking algorithms used in intelligent surveillance of traffic data can be classified into four major categories: region-based tracking algorithms, contour-based tracking algorithms, feature-based tracking algorithms, and model-based tracking algorithms [13]. The choice of these methods depends on the approach chosen for the object detection and the output generated by it. In this research effort, the output is provided as bounding boxes in consecutive frames, which is why an approximation of a contour-based tracking algorithm is chosen.


2.3 Traffic anomaly detection

Previous studies have analyzed the detection of abnormal traffic behavior. The work in [14] utilizes Markov random fields (MRF) and a hidden Markov model (HMM) to recognize abnormal vehicle behavior at intersections; the authors use a 2-D image to generate an object's moving trajectory and remap it within textures. In [5], anomalous behavior is detected based on a trajectory pattern learning module, such that a cluster of main flow direction (MFD) vectors generates the classification of trajectories. In [28], the authors propose a method based on multi-lane trajectories by comparing dense and free-flowing traffic conditions using Kalman filters. In [18], temporal outliers on roads are detected based on historical similarity trends.

Most outlier detection studies focus on lane changes; hence, the speeding case is relatively difficult to capture from trajectory analysis, as it does not involve many lane changes. In this paper, the authors introduce a new method of abnormal behavior detection focusing only on speeding scenarios. In practice, road police tend to pull over outliers rather than all speeding vehicles. Thus, this intuition can be realized by comparing the speed of the vehicle under test (VUT) with that of the nearby vehicles on the road.

3 Proposed Framework

3.1 Vehicle detection using live traffic video streams

The first step of the proposed framework is to detect vehicles that appear in the video. By applying the YOLO algorithm, a detected vehicle - regardless of whether it is moving or stationary - is confined within a bounding box. Next, the video is split into frames at a fixed sampling rate. The vehicle speed can be represented by the number of frames it appears in, such that the more frames it appears in, the slower the vehicle is, and vice versa.

For the surveillance video streams, it is helpful to separate the FOV into three parts, as shown in Fig. 1. Segments I and II represent the two traffic directions, while segment III is the farther end of the camera FOV, where vehicles become too small to be recognized. This separation is important for the following reasons: 1) it removes the top part of the video where vehicles are too small for detection, so that YOLO's recognition accuracy can be improved, and 2) it reduces the number of objects to process for faster computation.

Note that the location stamp of the camera is hidden in the figure for privacy reasons.
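The exact geometry of the split is not specified; the sketch below assumes a horizontal cut that discards the top portion of the frame (segment III) and a vertical cut that separates the two traffic directions (segments I and II), with both fractions chosen arbitrarily.

```python
import numpy as np

def split_fov(frame, top_fraction=0.35, divider_fraction=0.5):
    """Split a frame (H x W x 3 image array) into segments I and II.

    top_fraction     -- assumed fraction of the frame height occupied by
                        segment III (vehicles too small to recognize)
    divider_fraction -- assumed horizontal position separating the two
                        traffic directions
    """
    h, w = frame.shape[:2]
    top = int(h * top_fraction)
    mid = int(w * divider_fraction)
    segment_1 = frame[top:, :mid]  # one traffic direction
    segment_2 = frame[top:, mid:]  # opposite traffic direction
    return segment_1, segment_2    # segment III (frame[:top, :]) is discarded

# Example with a dummy 480x640 frame:
seg1, seg2 = split_fov(np.zeros((480, 640, 3), dtype=np.uint8))
```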

3.2 Vehicle tracking using detected vehicles

Using the vehicles detected in the previous step, the goal is to track them through multiple consecutive video frames by associating each vehicle with a unique ID. The tracking of objects is based solely on motion. The motion of each track is estimated by a Kalman filter, which is widely used in object tracking [36, 6].



Fig. 1. Camera's field of view is split into regions: segments I and II represent opposite lanes, while segment III is removed as vehicles are too small for object recognition

A Kalman filter is used to predict a vehicle's location in each video frame and to determine the likelihood of each location being associated with the vehicle's ID. Maintenance of vehicle IDs becomes a critical aspect of the object tracking, so a list of vehicle IDs is maintained. In a given video frame, some vehicle IDs may be assigned a detected location, while others may remain unassigned. The unassigned vehicle IDs are marked as invisible. A detected location that is not assigned to any existing ID is associated with a new vehicle ID.

Each vehicle ID keeps count of the number of consecutive frames in which it remained unassigned. If the count exceeds a certain threshold, the vehicle is assumed to have left the FOV, and its ID is deleted. A MATLAB example script called "Motion-Based Multiple Object Tracking" is used to implement vehicle tracking [22]. The script is modified as follows: the object detection system based on the background subtraction algorithm is replaced with YOLO detection.
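The track bookkeeping described above can be sketched as follows. For brevity, the Kalman-filter prediction and cost-based assignment are replaced here with a greedy nearest-centroid match, and the distance and invisibility thresholds are arbitrary placeholders; only the ID maintenance logic follows the description.

```python
import math
from itertools import count

_next_id = count(1)

def centroid(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def update_tracks(tracks, boxes, max_distance=50.0, max_invisible=10):
    """Associate detections (x, y, w, h) with vehicle IDs across frames.

    tracks -- dict: vehicle_id -> {"centroid": (x, y), "invisible": int}
    boxes  -- list of detections in the current frame
    Returns a dict of the IDs assigned in this frame.
    """
    assigned = {}
    unused = list(boxes)

    # Greedy nearest-centroid assignment (simplified stand-in for the
    # Kalman-filter prediction plus assignment step).
    for vid, track in tracks.items():
        if not unused:
            break
        best = min(unused, key=lambda b: math.dist(centroid(b), track["centroid"]))
        if math.dist(centroid(best), track["centroid"]) <= max_distance:
            track["centroid"] = centroid(best)
            track["invisible"] = 0
            assigned[vid] = best
            unused.remove(best)

    # Unassigned vehicle IDs are marked invisible; IDs invisible for too
    # many consecutive frames are assumed to have left the FOV and deleted.
    for vid in list(tracks):
        if vid not in assigned:
            tracks[vid]["invisible"] += 1
            if tracks[vid]["invisible"] > max_invisible:
                del tracks[vid]

    # Unassigned detections start new vehicle IDs.
    for box in unused:
        vid = next(_next_id)
        tracks[vid] = {"centroid": centroid(box), "invisible": 0}
        assigned[vid] = box

    return assigned
```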

3.3 Traffic anomaly detection using tracked vehicles

The proposed method applies the YOLO algorithm combined with a Kalman filter to track the vehicles, then determines the vehicles whose driving behavior is abnormal compared with their neighbors. The anomaly detection is performed by comparing the speed difference Δv between the VUT and its close neighbors. If Δv is significant, then the VUT is considered to be an outlier.


An authoritative source, namely the national speeding index, is proposed for determining such significance.

Some statistical facts from the index are: 1) in the 2010 census issued by the U.S. Census Bureau [12], the working-age population represented 112.8 million people (36.5% of the entire U.S. population); 2) 83% of U.S. adults drive on a daily basis per Gallup statistics [4]; and 3) based on Stanford Open Policing [31], police officers pull over more than 50,000 drivers nationwide on a typical day due to speeding. Thus, the speed citation rate c_t can be roughly estimated as:

c_t = (50 · 10^3) / (0.83 · 112.8 · 10^6) ≈ 0.05%    (1)

Hence, based on the cited police statistics, the threshold can be set at 0.05%:

Δs ≜ c_t = 0.05%    (2)
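The arithmetic behind Equation (1) is reproduced below directly from the cited figures; it is shown only to make the derivation of the 0.05% threshold explicit.

```python
# Rough estimate of the daily speed-citation rate c_t from the cited figures.
working_age_population = 112.8e6  # 2010 U.S. Census [12]
share_driving_daily = 0.83        # Gallup [4]
daily_speeding_stops = 50e3       # Stanford Open Policing [31]

c_t = daily_speeding_stops / (share_driving_daily * working_age_population)
print(f"c_t = {c_t:.2%}")         # prints "c_t = 0.05%", used as threshold Δs
```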

4 Evaluation Using Real Data

In this section, two experiments are presented for evaluating the proposed framework. First, stalled vehicles are detected using a 19-minute-long video from a traffic camera located in Mississippi [21]. In the video, a disabled car is located on the shoulder of a road. This is an extreme case of abnormal driving behavior. In the second experiment, vehicles that are either significantly faster or slower than the rest of the traffic are detected using a 53-hour-long video from a traffic camera also located in Mississippi.

The collected traffic data are fitted with a normal distribution function; the resulting distribution curves are shown in Fig. 2.

4.1 Detection of stalled vehicles

The previously determined threshold can be applied to the traffic flow such that the average speed citation ratio c_t is fixed at 0.05%. Note that a stalled vehicle becomes a stationary object for a relatively long period of time from the camera perspective, such that it appears in a significant number of frames. If the vehicle is stalled long enough, a large frame count gradually accumulates over time and eventually exceeds the average frame count of all the passing vehicles. Under these conditions, the pre-defined c_t is no longer applicable; hence, a new threshold should be computed for the stalled vehicles. In Fig. 2(c), the newly determined threshold is at 1.38%, where the threshold rolls dynamically, indicating different vehicle behavior patterns.
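The paper does not detail how the rolling threshold is recomputed; the sketch below assumes one simple variant in which a vehicle is flagged as stalled once its accumulated frame count exceeds the rolling average of recently completed tracks by a chosen factor.

```python
from collections import deque

def stalled_vehicle_ids(frame_counts, recent_counts, factor=10.0, window=100):
    """Flag vehicles whose frame counts dwarf the rolling average.

    frame_counts  -- dict: vehicle_id -> frames accumulated so far
    recent_counts -- iterable of frame counts of recently completed tracks
    factor        -- assumed multiple of the rolling average that marks a
                     stalled (near-stationary) vehicle
    """
    window_counts = deque(recent_counts, maxlen=window)
    if not window_counts:
        return set()
    rolling_avg = sum(window_counts) / len(window_counts)
    return {vid for vid, n in frame_counts.items() if n > factor * rolling_avg}
```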

Fig. 3 shows a typical abnormal traffic behavior captured by the surveillance camera in Mississippi. The white sedan on the curb is the disabled vehicle, while the white SUV next to it is driving normally; the other two vehicles in the left lane are also stopped on the road. This situation persisted for a relatively long time before police arrived.


Fig. 2. Normal distribution and frequency histogram of (a) left lane (incoming) traffic flow, (b) right lane (outgoing) traffic flow, and (c) stalled or disabled vehicle detection.


As mentioned earlier, the stalled vehicles in Fig. 3 will be detected over and over again, creating an excessive number of frames and making it possible to detect them. The described detection framework helps find and locate abnormal behaviors in nearly real-time. Combined with the effort of law enforcement and transportation agencies, it may facilitate the development of a safer and more intelligent traffic management system.

4.2 Detection of abnormal vehicles

Fig. 3. Vehicle detection using a snapshot from an actual traffic camera

The result of vehicle detection using a snapshot from an actual traffic camera located in Mississippi is shown in Fig. 3. Note that despite the low resolution of the image from the video, all three cars and a truck have been correctly detected, and their bounding boxes have been accurately determined.

For anomaly detection, this paper uses the number of frames in which each vehicle appears in the video: the more frames a vehicle appears in, the slower it is, and vice versa. In addition, in some cases the YOLO algorithm was not able to detect vehicles in every successive frame; hence, the Kalman filter algorithm helps track the vehicles overlooked by YOLO and adds the missing frames in order to increase the accuracy. In the processed data, a rolling selection compares the VUT with its nearby vehicles on the road: it computes the average number of frames over the N vehicles including the VUT and calculates the frame-number ratio j of each vehicle relative to that average, as in (3). If the ratio is smaller or larger than the preset threshold Δs, the VUT is considered abnormal.

j(VUT) = (# of frames of VUT) / (average # of frames among the neighboring 20 vehicles)    (3)
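A minimal sketch of this rolling comparison is given below, assuming frame counts are stored in the order the vehicles appear and that the 20-vehicle neighborhood is split evenly before and after the vehicle under test; the lower and upper bounds are placeholders standing in for the decision rule tied to the preset threshold Δs.

```python
def frame_ratio(frame_counts, index, neighborhood=20):
    """j(VUT): frames of the VUT over the average frames of ~20 neighbors.

    frame_counts -- list of frame counts ordered by vehicle appearance
    index        -- position of the vehicle under test (VUT) in that list
    """
    half = neighborhood // 2
    lo = max(0, index - half)
    hi = min(len(frame_counts), index + half + 1)
    neighbors = frame_counts[lo:index] + frame_counts[index + 1:hi]
    if not neighbors:
        return 1.0
    return frame_counts[index] / (sum(neighbors) / len(neighbors))

def abnormal_vehicle_indices(frame_counts, lower=0.5, upper=2.0):
    """Flag vehicles whose ratio falls outside placeholder bounds."""
    return [i for i in range(len(frame_counts))
            if not lower <= frame_ratio(frame_counts, i) <= upper]
```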


The YOLO and Kalman filter algorithms are integrated in MATLAB, resulting in a log file that contains the raw dataset of the vehicles captured in the surveillance video. The final results are processed using this dataset. Fig. 2 shows the normal distribution and frequency histogram results for each traffic direction (referring to Fig. 1, Segments I and II). From the figure, it is noticeable that most vehicles drive at a similar speed, while the outliers lie in the most extreme parts of the curve. In fact, the detected outliers might not be recognized directly by the human eye; thus, after validation, the proposed method could be evaluated based on actual examples of abnormal vehicle behavior.

5 Discussion

The paper presents an initial study of detecting abnormal driving behavior using live video streams. The large variance in road, traffic, weather, sunlight, and other conditions, such as rush hour, heavy rain, etc., may affect the difference in driving speeds of the vehicles on the same road segment. By comparing the speed differences between nearby vehicles under the same conditions during the same time period, we can detect vehicles with abnormal behavior more effectively and convincingly.

The proposed framework successfully detects all stalled vehicles in an actual video from a traffic camera. Note that stalled vehicles represent an extreme case of abnormal behavior, while speeding or slow vehicles are more difficult to detect.

The proposed model uses the number of frames in which vehicles appear for anomaly detection. Specifically, the number of frames in which each vehicle appears is compared with its nearby cars, both before and after it. Thus, the model is sensitive to the accuracy of the number of frames computed for each vehicle's track.

However, even the state-of-the-art method for object detection may generate erroneous vehicle locations. This is possible due to a number of reasons, including the occlusion of a smaller car by a larger vehicle, e.g., a truck. This results in erroneous detection of vehicles by the object tracking algorithm, such that a vehicle's track may suddenly begin in the middle of a road. Similarly, a vehicle's track may end abruptly in the middle of a road due to occlusion.

This problem cannot be resolved by a Kalman filter, as it may only detect the missing intermediate locations but not the prior locations. In other words, the vehicle tracking component treats such erroneous tracks as valid. Hence, additional contextual knowledge can be used to filter out erroneous tracks that either begin or end at an invalid location in the FOV of a traffic camera.
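One possible realization of such a contextual filter, assuming that a plausible track must both start and end within a margin of the camera FOV border (where vehicles legitimately enter and exit), is sketched below.

```python
def is_plausible_track(track, frame_width, frame_height, margin=40):
    """Reject tracks that begin or end in the middle of the road segment.

    track  -- list of (x, y) centroids in temporal order
    margin -- assumed pixel distance from the FOV border within which a
              vehicle may legitimately appear or disappear
    """
    def near_border(point):
        x, y = point
        return (x < margin or x > frame_width - margin or
                y < margin or y > frame_height - margin)

    return near_border(track[0]) and near_border(track[-1])
```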

Note that the detection of stalled vehicles is not sensitive to the accuracy of the number of frames, as their tracks are significantly longer than those of the remaining traffic, such that their detection is robust.


6 Conclusion and Future Work

In this paper we propose and evaluate a framework for detecting abnormal vehicle behavior using traffic cameras. The framework uses a state-of-the-art method for object detection in video frames, then tracks vehicle locations through successive frames using a Kalman filter, and finally performs anomaly detection using the number of frames in which a vehicle appears as a relative measure of its speed. The proposed approach is applied to detect all stalled vehicles in an actual traffic video stream. In addition, several improvements are suggested to detect other abnormal vehicle behavior, including fast and slow cars.

Based on the presented study, the method of detecting abnormal vehicle behavior using YOLO and a Kalman filter is feasible and applicable, while the preset threshold is able to determine whether the vehicle of interest is speeding or not. In future work, the authors will focus on detecting other abnormal behaviors such as sudden acceleration and deceleration, swerving, and sudden lane changes. In addition, a comprehensive evaluation of the proposed model for such behaviors will be performed. Finally, a comparison between SSD and YOLO will be made to show which single-shot model performs better.

References

[1] Zainab Abbas et al. "Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks". In: 7th IEEE International Congress on Big Data, BigData Congress 2018, 2 July 2018 through 7 July 2018. IEEE. 2018, pp. 57-65.
[2] C.S. Asha and A.V. Narasimhadhan. "Vehicle Counting for Traffic Management System using YOLO and Correlation Filter". In: vol. 1. Mar. 2018, pp. 1-6.
[3] Anan Banharnsakun and Supannee Tanathong. "A hierarchical clustering of features approach for vehicle tracking in traffic environments". In: Int. J. Intelligent Computing and Cybernetics 9.4 (2016), pp. 354-368.
[4] M. Brenan. 83% of U.S. Adults Drive Frequently: Fewer Enjoy It a Lot. https://news.gallup.com/poll/236813/adults-drive-frequently-fewer-enjoy-lot.aspx. July 2018.
[5] Y. Cai et al. "Trajectory-based anomalous behaviour detection for intelligent traffic surveillance". In: IET Intelligent Transport Systems 9.8 (Feb. 2015), pp. 810-816.
[6] Dorin Comaniciu, Visvanathan Ramesh, and Peter Meer. "Kernel-Based Object Tracking". In: IEEE Trans. Pattern Anal. Mach. Intell. 25.5 (2003), pp. 564-575.
[7] N. Dalal and B. Triggs. "Histograms of Oriented Gradients for Human Detection". In: vol. 1. June 2005, pp. 886-893.
[8] Sokemi Rene Emmanuel Datondji et al. "A Survey of Vision-Based Traffic Monitoring of Road Intersections". In: IEEE Transactions on Intelligent Transportation Systems 17.10 (2016), pp. 2681-2698.
[9] Ross B. Girshick. "Fast R-CNN". In: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. 2015, pp. 1440-1448.
[10] Ross B. Girshick et al. "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation". In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014. 2014, pp. 580-587.
[11] Jan Hendrik Hosang et al. "What Makes for Effective Detection Proposals?" In: IEEE Trans. Pattern Anal. Mach. Intell. 38.4 (2016), pp. 814-830.
[12] L.M. Howden and J.A. Meyer. "Contacts Between Police and the Public, 2010". In: 2010 Census Briefs (May 2011), pp. 1-16.
[13] Weiming Hu et al. "Traffic accident prediction using 3-D model-based vehicle tracking". In: IEEE Trans. Vehicular Technology 53.3 (2004), pp. 677-694.
[14] S. Kamijo et al. "Traffic monitoring and accident detection at intersections". In: IEEE Transactions on Intelligent Transportation Systems 1.2 (June 2000), pp. 108-118.
[15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks". In: Commun. ACM 60.6 (2017), pp. 84-90.
[16] Theocharis Kyriacou, Guido Bugmann, and Stanislao Lauria. "Vision-based urban navigation procedures for verbally instructed robots". In: Robotics and Autonomous Systems 51.1 (2005), pp. 69-80.
[17] Chengyou Li and Tao Hua. "Human action recognition based on template matching". In: Procedia Engineering 15 (2011), pp. 2824-2830.
[18] X. Li et al. "Temporal Outlier Detection in Vehicle Traffic Data". In: Mar. 2009, pp. 1319-1322.
[19] Wei Liu et al. "SSD: Single Shot MultiBox Detector". In: Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I. 2016, pp. 21-37.
[20] Y. Lv et al. "Traffic Flow Prediction With Big Data: A Deep Learning Approach". In: IEEE Transactions on Intelligent Transportation Systems 16.2 (Apr. 2015), pp. 865-873.
[21] MDOTtraffic — Powered by MDOT. https://www.mdottraffic.com/. (Accessed on 03/16/2019).
[22] Motion-Based Multiple Object Tracking - MATLAB & Simulink. https://www.mathworks.com/help/vision/examples/motion-based-multiple-object-tracking.html. (Accessed on 03/22/2019).
[23] Joseph Redmon et al. "You Only Look Once: Unified, Real-Time Object Detection". In: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. 2016, pp. 779-788.
[24] Shaoqing Ren et al. "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". In: IEEE Trans. Pattern Anal. Mach. Intell. 39.6 (2017), pp. 1137-1149.
[25] S. Ren et al. "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 39.6 (June 2017), pp. 1137-1149.
[26] Road Safety Facts - Association for Safe International Road Travel. https://www.asirt.org/safe-travel/road-safety-facts/. (Accessed on 03/15/2019).
[27] Chalermpol Saiprasert and Wasan Pattara-Atikom. "Smartphone Enabled Dangerous Driving Report System". In: 46th Hawaii International Conference on System Sciences, HICSS 2013, Wailea, HI, USA, January 7-10, 2013. 2013, pp. 1231-1237.
[28] S. Sivaraman, B. Morris, and M. Trivedi. "Learning multi-lane trajectories using vehicle-based vision". In: Nov. 2011, pp. 2070-2076.
[29] Ihor Smal et al. "Multiple object tracking in molecular bioimaging by Rao-Blackwellized marginal particle filtering". In: Medical Image Analysis 12.6 (2008), pp. 764-777.
[30] Unnikrishnan Kizhakkemadam Sreekumar et al. "Real-Time Traffic Pattern Collection and Analysis Model for Intelligent Traffic Intersection". In: 2018 IEEE International Conference on Edge Computing (EDGE). IEEE. 2018, pp. 140-143.
[31] The Results of Our Nationwide Analysis of Traffic Stops and Searches. https://openpolicing.stanford.edu/findings/.
[32] L. Xie et al. "A New CNN-Based Method for Multi-Directional Car License Plate Detection". In: IEEE Transactions on Intelligent Transportation Systems 19.2 (Feb. 2018), pp. 507-517.
[33] Wen Yao et al. "On-Road Vehicle Trajectory Collection and Scene-Based Lane Change Analysis: Part II". In: IEEE Transactions on Intelligent Transportation Systems 18.1 (2017), pp. 206-220.
[34] Alper Yilmaz, Omar Javed, and Mubarak Shah. "Object tracking: A survey". In: ACM Comput. Surv. 38.4 (2006), p. 13.
[35] Alper Yilmaz, Xin Li, and Mubarak Shah. "Contour-Based Object Tracking with Occlusion Handling in Video Acquired Using Mobile Cameras". In: IEEE Trans. Pattern Anal. Mach. Intell. 26.11 (2004), pp. 1531-1536.
[36] Shimin Yin et al. "Hierarchical Kalman-particle filter with adaptation to motion changes for object tracking". In: Computer Vision and Image Understanding 115.6 (2011), pp. 885-900.