
Vehicle Behavior Understanding Based on Movement String

JiuYue Hao, Sheng Hao, Chao Li, Zhang Xiong, Ejaz Hussain

School of Computer Science and Engineering

Beihang University

Beijing, China

[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract - This paper proposes an analysis method based on the movement string for behavior understanding. Trajectories are analyzed with an improved principal component analysis (PCA) method that incorporates trajectory location and direction. These two features drive both the scene division and the Gaussian mixture hidden Markov model (HMM), allowing actions to be recognized and abnormal events to be identified. The movement string is defined and analyzed to obtain the semantic features of a vehicle, and the four rules presented in the paper let us infer its behavior and describe it in natural language. Finally, experiments were used to select the best initial HMM parameters for training; applied to a real scene, the method achieves a recognition rate of 88.6%. The results confirm the accuracy of the behavior understanding.

Keywords—Behavior understanding; Movement String; PCA; HMM

I. Introduction

Vehicle behavior understanding in intelligent surveillance is currently one of the most active research topics in computer vision. Its goal is to analyze and recognize object motion patterns and to produce high-level descriptions. Behavior understanding involves motion information extraction, scene analysis, and activity recognition, as well as behavior inference and semantic description. Information extraction and scene analysis are the fundamental and critical tasks. The trajectory is the most direct motion feature for activity description, since it carries vehicle location and direction information. A vehicle's motion trajectory is constrained by scene knowledge with spatial context, such as "a car is turning left", "an object goes straight", or "a car moves against the traffic rules". By analyzing trajectories, we therefore learn typical activity patterns from training samples and obtain semantic regions for activity perception. Based on trajectory analysis, we propose a new method to describe vehicle behavior.

At present, a number of methods have been proposed for trajectory analysis [1]. Sheng et al. [2-3] developed a highway surveillance system for detecting abnormal vehicle events; however, they paid no attention to classifying normal events, and a highway scene is unidirectional and therefore less complex. Hu et al. [4] encoded trajectories and built a hierarchical self-organizing neural network model to learn trajectory distribution patterns for event recognition and object behavior prediction. Shehzad et al. [5] extracted a Fourier coefficient feature space with the discrete Fourier transform to discover patterns of similar object motions, and used a self-organizing map to learn the similarity between objects. Wang et al. [6] proposed a novel similarity measure that compares the spatial distribution of trajectories and other attributes, such as velocity and object size, and used statistics to estimate the density and distribution of the velocity direction. However, object motion is time-varying data, and the statistical model built by Wang overlooks this temporal variability.

The activity pattern of an object can be recognized by the trajectory analysis methods described above: one can judge whether the motion of the object conforms to the scene or follows an abnormal pattern. The main difficulty in transforming video images into textual descriptions is bridging the semantic gap between object activity patterns and behavior description. In this paper, we propose the movement string in combination with an HMM. We define five actions (parking, slow-speed, medium-speed, high-speed, and exceed-speed) to describe the movement of the vehicle in each HMM state. Through the state transitions of object behaviors, we can also infer detailed information about the vehicle, such as accelerating, decelerating, and braking.

An outline of the proposed method is shown in Fig. 1. First, each trajectory is encoded into a sequence of flow vectors representing the position and velocity of the object. Based on position and velocity angle, a new PCA-based similarity measure is introduced. Second, a hierarchical clustering algorithm is applied to divide the scene into regions. Third, based on the clusters, trajectory samples are trained with Gaussian mixture HMMs to obtain analysis models, with which trajectories can be recognized and abnormal events detected.


Finally, we show that every trajectory can be represented as a movement string, which helps us understand vehicle behavior.

Fig 1: Algorithm flow chart

II. Trajectory Similarity Measure

In outdoor traffic surveillance scenes, many applications depend on analyzing motion trajectories. Recent advances in object tracking have also made it possible to mine large trajectory sets, and cluster analysis is a common mining method. For similarity-based clustering, a key issue is how to measure the similarity between two trajectories. Zhang [7] summarized methods for measuring trajectory similarity in different applications, including Euclidean distance, PCA (principal component analysis) + Euclidean, Hausdorff distance, DTW (dynamic time warping), and LCSS (longest common subsequence). The Euclidean distance ||p - q|| compares every point of one trajectory with the corresponding point of another trajectory of similar length; it works well for small numbers of trajectories but is time-consuming for large data sets. PCA + Euclidean was proposed to reduce the time cost; its drawback is that speed information is lost during re-sampling. The Hausdorff distance can handle trajectories of different lengths but has the largest time cost. DTW and LCSS are better at measuring shape similarity, e.g. in sign datasets. In outdoor surveillance scenes the difference between two clusters mainly arises from different positions, so PCA + Euclidean produces better results at a lower time cost. In this section, we propose a method that handles the position and direction features independently to solve the problem of information loss.

A. Trajectory pre-process

A similarity measure requires encoding trajectories into a uniform form. Long-term observation yields many trajectories. The tracker processes frames at a fixed rate $\Delta t$; thus, for an object $O$ that moves during $n$ frames, we obtain a sequence $T$:

$$T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_{n-1}, y_{n-1}), (x_n, y_n)\} \quad (1)$$

where $(x_i, y_i)$ is the two-dimensional image coordinate of the centroid of object $O$ at the $i$-th frame.

The vehicle's instantaneous velocity at the $i$-th frame is given by $(\delta x_i, \delta y_i)$, where $\delta x_i = x_{i+1} - x_i$ and $\delta y_i = y_{i+1} - y_i$. The velocity angle is computed as follows:

$$\theta_i =
\begin{cases}
\arctan\left(\dfrac{\delta y_i}{\delta x_i}\right), & \delta x_i > 0,\ \delta y_i \ge 0 \\[4pt]
\dfrac{\pi}{2}, & \delta x_i = 0,\ \delta y_i \ge 0 \\[4pt]
\arctan\left(\dfrac{\delta y_i}{\delta x_i}\right) + \pi, & \delta x_i < 0 \\[4pt]
\dfrac{3\pi}{2}, & \delta x_i = 0,\ \delta y_i < 0 \\[4pt]
\arctan\left(\dfrac{\delta y_i}{\delta x_i}\right) + 2\pi, & \delta x_i > 0,\ \delta y_i < 0
\end{cases} \quad (2)$$

A trajectory is then described by a sequence of flow vectors, where a flow vector $f_i$ represents both the position of the object and its direction:

$$f_i = (x_i, y_i, \theta_i) \quad (3)$$

Thus an object trajectory can be represented as $T_o$:

$$T_o = \{f_1, f_2, \ldots, f_i, \ldots, f_{n-1}, f_n\} \quad (4)$$
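To make the encoding concrete, the following is a minimal NumPy sketch of Eqs. (1)-(4); the function name encode_trajectory and the handling of the last point (which has no forward difference) are our assumptions, not part of the paper.

```python
import numpy as np

def encode_trajectory(points):
    """Encode a tracked centroid sequence [(x1, y1), ..., (xn, yn)] into
    flow vectors f_i = (x_i, y_i, theta_i) as in Eqs. (1)-(4)."""
    pts = np.asarray(points, dtype=float)        # shape (n, 2)
    dx = np.diff(pts[:, 0])                      # delta x_i = x_{i+1} - x_i
    dy = np.diff(pts[:, 1])                      # delta y_i = y_{i+1} - y_i
    # atan2 mapped into [0, 2*pi) is equivalent to the piecewise form of Eq. (2)
    theta = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)
    # the last point has no forward difference; reuse the previous angle
    theta = np.append(theta, theta[-1])
    return np.column_stack([pts[:, 0], pts[:, 1], theta])

# Example: a short track moving right and slightly upward
flow = encode_trajectory([(10, 5), (12, 6), (15, 8), (19, 10)])
```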

Due to tracking errors or obstacles in a scene, trajectories can be fragmented and may contain noisy points. Using the trajectory length and the angle at each point, we remove these noisy points to improve the accuracy of clustering. The length of a trajectory is calculated as

$$L(T_o) = \sum_{k=1}^{n-1} \sqrt{(x_{k+1} - x_k)^2 + (y_{k+1} - y_k)^2} \quad (5)$$

If $L(T_o) \le \frac{2}{3m}\sum_{k=1}^{m} L(T_k)$, the trajectory is considered incomplete and is removed from the sample database; here $m$ is the number of trajectories in the database.

Also, to avoid noise interference, we compare the angles of adjacent flow vectors, $\theta_{i-1}, \theta_i, \theta_{i+1}$. If $\frac{\pi}{2} \le |\theta_i - \theta_{i-1}| \le \frac{3\pi}{2}$, the change of the vehicle's angle is greater than $\frac{\pi}{2}$ and the point is marked as noise. We then update the flow vector $f_i$ as $x_i = \frac{x_{i-1} + x_{i+1}}{2}$, $y_i = \frac{y_{i-1} + y_{i+1}}{2}$ and recompute $\theta_i$ by (2).

Finally all trajectories are transformed so that each component lies in the interval [0, 1].
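The length filter of Eq. (5), the angle-based noise smoothing, and the [0, 1] normalization can be sketched as follows, assuming each trajectory is stored as an (n, 3) array of flow vectors; the exact normalization (a global min-max over all kept trajectories) is our assumption.

```python
import numpy as np

def filter_and_normalize(trajectories):
    """Drop incomplete trajectories, smooth noisy points, scale to [0, 1]."""
    def length(t):                                    # Eq. (5)
        return np.sum(np.hypot(np.diff(t[:, 0]), np.diff(t[:, 1])))

    mean_len = np.mean([length(t) for t in trajectories])
    kept = [t.copy() for t in trajectories if length(t) > 2.0 / 3.0 * mean_len]

    for t in kept:
        for i in range(1, len(t) - 1):
            # an angle change greater than pi/2 marks a noise point
            if np.pi / 2 <= abs(t[i, 2] - t[i - 1, 2]) <= 3 * np.pi / 2:
                t[i, 0] = (t[i - 1, 0] + t[i + 1, 0]) / 2.0
                t[i, 1] = (t[i - 1, 1] + t[i + 1, 1]) / 2.0
                t[i, 2] = np.mod(np.arctan2(t[i + 1, 1] - t[i, 1],
                                            t[i + 1, 0] - t[i, 0]), 2 * np.pi)

    # scale every component of every kept trajectory into [0, 1]
    stacked = np.vstack(kept)
    lo, hi = stacked.min(axis=0), stacked.max(axis=0)
    return [(t - lo) / (hi - lo + 1e-12) for t in kept]
```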

B. Similarity measure

PCA is widely used for building predictive models as well as for data analysis. Its central idea [8] is to reduce the dimensionality of a correlated data set while retaining as much of the variation as possible. Mathematically, it is an orthogonal linear transformation of a matrix, and we choose the eigenvectors corresponding to the $r$ largest eigenvalues. For efficient computation of the principal components, we use the singular value decomposition (SVD).

Trajectories are smoothed and re-sampled by the method described in Section II-A. Let there be $M$ trajectories to be indexed, each with $N$ points.

The $M \times N$ matrices $S_x$, $S_y$, $S_\theta$ are formed so that each row contains one trajectory's $x$, $y$, and $\theta$ components, respectively. The matrix $S$ is then defined as

$$S = \left[\,\omega_1 S_x,\ \omega_2 S_y,\ \omega_3 S_\theta\,\right]_{M \times 3N} \quad (6)$$

where $\omega_1, \omega_2, \omega_3$ are weighting values.

The PCA computation generates $r$-dimensional PCA coefficients using the transformation $P = W_r^{T} S$, where $W_r$ is formed by the leading eigenvectors. The indexing structure then consists of the $M \times r$ matrix $P = (p_1, p_2, \ldots, p_M)$, where $r \ll 3N$. Finally, the distance between $T_i$ and $T_j$ is computed by the Euclidean distance $e_{ij} = \|p_i - p_j\|$.
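In code, the weighted matrix $S$ of Eq. (6), its projection onto the first $r$ principal components via SVD, and the pairwise distances $e_{ij}$ could look like the sketch below; re-sampling every trajectory to a common length $N$ is assumed to have been done already, and the centering step is our addition (standard PCA practice, not stated in the paper).

```python
import numpy as np

def pca_distance_matrix(Sx, Sy, St, w=(1.0, 1.0, 3.0), r=3):
    """Sx, Sy, St: (M, N) matrices holding x, y, theta per trajectory row."""
    S = np.hstack([w[0] * Sx, w[1] * Sy, w[2] * St])     # (M, 3N), Eq. (6)
    S = S - S.mean(axis=0)                               # center before PCA
    # rows of Vt are the principal directions of the trajectory set
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    P = S @ Vt[:r].T                                     # (M, r) PCA coefficients p_i
    # pairwise Euclidean distances e_ij = ||p_i - p_j||
    diff = P[:, None, :] - P[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```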

III. Trajectory clustering

A trajectory clustering algorithm divides the scene into semantic regions. Trajectories that are spatially close and have similar velocities represent one type of activity pattern. Our proposed similarity measure does not restrict the choice of clustering algorithm: methods such as the single-class support vector machine clustering of Piciarelli et al. [9], or the others discussed in Section I, can be used. In this paper, we apply a hierarchical clustering algorithm, which does not require the number of clusters to be known in advance. The clustering result for scene S1 is shown in Fig. 2 and Fig. 3, where the vehicle trajectories are grouped into six clusters. We choose $\omega_1 = 1$, $\omega_2 = 1$, $\omega_3 = 3$ and the dimension $r = 3$.
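One way to obtain the clusters from the distance matrix above is an agglomerative scheme with a distance cut-off, so that the number of clusters need not be fixed in advance. The SciPy-based sketch below is illustrative; the linkage method and threshold value are our assumptions, not parameters reported in the paper.

```python
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_trajectories(dist_matrix, threshold=0.5):
    """Hierarchical clustering of trajectories from a precomputed
    pairwise distance matrix; returns a cluster label per trajectory."""
    condensed = squareform(dist_matrix, checks=False)   # condensed form for linkage
    Z = linkage(condensed, method='average')
    return fcluster(Z, t=threshold, criterion='distance')
```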

Fig. 2 shows the vector distribution in the coordinate axes corresponding to the six trajectory clusters in the real scene S1. It demonstrates that, using the improved PCA similarity measure, we can extract the main features that represent each scene region.

Fig.2 Vector clustering result

Fig.3 Trajectory clustering result

IV. HMM trajectory analysis

HMMs are best known for temporal pattern recognition; we therefore adopt an HMM for vehicle movement, since vehicle motion is temporally continuous. Faisal et al. [10] proposed an HMM-based scheme for sign language classification and recognition. For vehicle trajectories, we train HMMs to model trajectories and compute the maximum likelihood to recognize trajectories and single out abnormal events. In this section we focus on the choice of initial HMM parameters. Object motion is an irreversible continuous process, so the left-right model of Fig. 4 is the natural choice.

Fig. 4: HMM left-right model

An HMM is characterized by $\lambda = (\pi, A, B)$:

1) the initial state distribution $\pi = \{\pi_1, \pi_2, \ldots, \pi_N\}$;

2) the state transition matrix $A = \{a_{ij}\}_{N \times N}$, $1 \le i, j \le N$;

3) the observation symbol probability distribution in state $j$, $B = \{b_{jk}\}_{N \times M}$, $1 \le j \le N$, $1 \le k \le M$.

Here $N$ is the number of states and $M$ is the number of distinct observation symbols per state. As shown in Fig. 4, every trajectory starts at state 1, so we define $\pi = [1, 0, \ldots, 0]^{T}_{1 \times N}$. The state transition matrix $A$ is initialized with uniform (average) probabilities.

We use a Gaussian mixture (GM) distribution to describe the state-conditional probability distribution; the component densities are combined to provide a multimodal density. Considering the number of samples and the trajectory distribution, we choose M = 2 mixture components per state.
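The initial $\pi$ and $A$ of the left-right model can be written down directly. The paper only states that $A$ is initialized with average probabilities; the sketch below assumes each state may either stay or advance one step, which is the most common left-right layout.

```python
import numpy as np

def left_right_init(n_states):
    """Initial parameters for the left-right HMM of Fig. 4."""
    pi = np.zeros(n_states)
    pi[0] = 1.0                          # pi = [1, 0, ..., 0]^T: start in state 1
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = A[i, i + 1] = 0.5      # equal probability to stay or advance
    A[-1, -1] = 1.0                      # last state is absorbing
    return pi, A
```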

An EM-type Baum-Welch algorithm is used to learn the unknown HMM parameters. Given the observed data, it performs an iterative maximum-likelihood estimation. The aim of parameter learning is to find the model parameters $\lambda^{*} = \arg\max_{\lambda} \log p(O \mid \lambda)$ for a given set $O$ of observed data. After setting the initial value of $\lambda$, the parameter estimation repeats the preliminary and update steps until $\log p(O \mid \lambda)$ reaches a local maximum.

Trajectory recognition selects the best description of a sequence from the HMM set $\lambda = \{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ trained by the EM algorithm. We calculate the conditional probability of a test sample with the forward-backward algorithm and take the largest value as the recognition result. When all conditional probability values are $-\infty$, the trajectory is judged to be abnormal.
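A hedged sketch of the training and recognition loop follows. It assumes the third-party hmmlearn package for Gaussian-mixture HMMs (GMMHMM); the left-right transition layout, the helper names, and the use of non-finite scores as the abnormality test are our assumptions, not details given in the paper.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM   # third-party package, assumed available

def train_cluster_models(clusters, n_states=4, n_mix=2):
    """Train one GMM-HMM per trajectory cluster with Baum-Welch (EM).
    clusters: list of clusters, each a list of (T, 3) flow-vector arrays."""
    models = []
    for trajs in clusters:
        X = np.vstack(trajs)
        lengths = [len(t) for t in trajs]
        m = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type='diag',
                   n_iter=50, init_params='mcw', params='tmcw')
        m.startprob_ = np.eye(n_states)[0]            # start in state 1
        trans = np.zeros((n_states, n_states))        # left-right transitions
        for i in range(n_states):
            trans[i, i:min(i + 2, n_states)] = 1.0
        m.transmat_ = trans / trans.sum(axis=1, keepdims=True)
        m.fit(X, lengths)
        models.append(m)
    return models

def recognize(models, traj):
    """Score a trajectory against every cluster model; the largest
    log-likelihood wins, and -inf everywhere means an abnormal event."""
    scores = np.array([m.score(traj) for m in models])
    if not np.isfinite(scores).any():
        return 'abnormal', None
    return 'normal', int(scores.argmax())
```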

In the training stage, the number of HMM states must be specified, since the results are very sensitive to it. The traditional left-right HMM usually divides a trajectory into 3-6 states. We separately performed HMM training on 13 left-turn trajectories and 16 straight trajectories, each with a sequence length of 30. The tests show that with 4 states per trajectory the recognition rate reaches its maximum for both sets; the results are given in Table 1. If N is too small, the states cannot represent a complex trajectory; if N is too large, the points in each state are not sufficient for training. In either case the maximum recognition rate cannot be reached.

Table 1: Recognition rate for different numbers of states

No. of states   Turn     Straight
N=3             80.0%    77.8%
N=4             90.0%    100.0%
N=5             90.0%    94.4%
N=6             75.0%    88.9%

V. Behavior understanding

HMM trajectory analysis can recognize activity patterns, but in general there is a semantic gap between the trajectory information obtained directly from video and the conceptual information contained in natural language. In this section, to describe a vehicle's activity, we propose the concept of the movement string: each HMM state is represented by a symbol, and the number of points in each state is recorded. Natural language inherently contains various concepts of actions and events, so we next define and analyze the movement string and tag each state with one of five actions representing the object's behavior in that state. Through the transitions between behaviors, we can also infer the details of vehicle events: uniform motion, accelerating, decelerating, and braking.

Definition 5.1 (Movement string): A string representing the trajectory's state transitions, defined as

$$\{x_1^{n_1} x_2^{n_2} \ldots x_k^{n_k}\}$$

where $x_i$ denotes an HMM state in the trajectory and $k$ is the number of states. Each state can contain many points; for state $x_i$, $n_i$ is the number of points in that state. The total number of points in the trajectory is $N = n_1 + n_2 + \ldots + n_k$.

Definition 5.2 (Behavior description symbol): We denote the actions parking, slow-speed, medium-speed, high-speed, and exceed-speed by the symbols P, S, M, H, and E, respectively. When the vehicle moves uniformly through the scene, each state contains on average

$$\bar{n} = \frac{N}{k}$$

points.

We can now establish an important relationship between behavior understanding and the movement string, based on the experimental results tabulated in Table 2.

Table 2: Relationship between behavior and movement string

Description symbol     Points range
Exceed-speed (E)       $[0, \bar{n} - 3L)$
High-speed (H)         $[\bar{n} - 3L, \bar{n} - L)$
Medium-speed (M)       $[\bar{n} - L, \bar{n} + L]$
Slow-speed (S)         $(\bar{n} + L, \bar{n} + 3L]$
Parking (P)            $(\bar{n} + 3L, N]$

where $L = \bar{n}/5$.
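The mapping of Table 2 is easy to express as a lookup; the sketch below assumes the movement string is available as a list of (state, point count) pairs, e.g. taken from the decoded state sequence of the recognized model (the pair representation and function name are our assumptions). Note that the code uses the unrounded mean, while the worked example in the next paragraph rounds $\bar{n}$ and $L$; both give the same symbols here.

```python
def tag_actions(movement_string):
    """movement_string: [(state, n_i), ...]. Returns one symbol P/S/M/H/E per
    state using the point-count ranges of Table 2 (fewer points in a state
    means the vehicle crossed it faster)."""
    N = sum(n for _, n in movement_string)
    k = len(movement_string)
    n_bar = N / k                 # average points per state
    L = n_bar / 5.0
    symbols = []
    for _, n in movement_string:
        if n < n_bar - 3 * L:
            symbols.append('E')   # exceed-speed
        elif n < n_bar - L:
            symbols.append('H')   # high-speed
        elif n <= n_bar + L:
            symbols.append('M')   # medium-speed
        elif n <= n_bar + 3 * L:
            symbols.append('S')   # slow-speed
        else:
            symbols.append('P')   # parking
    return symbols

# The worked example below: {a^10 b^8 c^7 d^5} -> ['S', 'M', 'M', 'H']
print(tag_actions([('a', 10), ('b', 8), ('c', 7), ('d', 5)]))
```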

According to Table 2, we can infer the descriptive behavior of any trajectory. Take, for example, a four-state trajectory with movement string $\{a^{10} b^{8} c^{7} d^{5}\}$ from one of the scene models. Here $k = 4$, $N = 30$, $\bar{n} = 7$ and $L = 1$. According to the relationship, the first state $\{a^{10}\}$ falls in the slow-speed range, the second and third states $\{b^{8} c^{7}\}$ fall in the medium-speed range, and the last state $\{d^{5}\}$ falls in the high-speed range. The behavior can therefore be described by the sequence $\{S \to M \to M \to H\}$.

Now that we have the behavior sequence, we can infer another aspect of vehicle events: uniform moving, accelerating, decelerating, and braking. The rules are given below.


Suppose $s_1, s_2 \in \{P, S, M, H, E\}$ and $P < S < M < H < E$. Then:

Rule 1: $[s_1 = s_2] \Rightarrow$ uniform moving

Rule 2: $[s_1 < s_2] \Rightarrow$ accelerating

Rule 3: $[s_1 > s_2] \Rightarrow$ decelerating

Rule 4: $s_1 \in \{S, M, H, E\}\ \&\ s_2 = P \Rightarrow$ braking

From these four rules, we can understand the vehicle's events in traffic scenes. The inference process for the sequence $\{S \to M \to M \to H\}$ is shown in Fig. 6.
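Rules 1-4 translate each pair of adjacent action symbols into an event. The following sketch also merges consecutive identical events, as done in Experiment 3; placing that merge inside the function is our reading of that step.

```python
SPEED_ORDER = {'P': 0, 'S': 1, 'M': 2, 'H': 3, 'E': 4}   # P < S < M < H < E

def infer_events(symbols):
    """Apply Rules 1-4 to a symbol sequence such as ['S', 'M', 'M', 'H']."""
    events = []
    for s1, s2 in zip(symbols, symbols[1:]):
        if s1 == s2:
            event = 'uniform moving'                 # Rule 1
        elif s2 == 'P':
            event = 'braking'                        # Rule 4 (takes precedence)
        elif SPEED_ORDER[s1] < SPEED_ORDER[s2]:
            event = 'accelerating'                   # Rule 2
        else:
            event = 'decelerating'                   # Rule 3
        if not events or events[-1] != event:        # merge equal neighbours
            events.append(event)
    return events

# {S -> M -> M -> H}: accelerating, uniform moving, accelerating (cf. Fig. 6)
print(infer_events(['S', 'M', 'M', 'H']))
```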

Fig.6 Event inference process

VI. Experimental Results

We used the mean-shift tracker described in [11]; the tracking program processes 5 frames per second. Moving vehicles are tracked in the image plane to obtain a series of trajectories. After pre-processing, we obtained 69 trajectories in scene S1 as training samples for clustering and for constructing the HMM scene models. Scene S1 is a typical traffic region that is difficult to divide for three reasons: first, it contains many different roads; second, two of the roads intersect; and third, regions of roads with opposite directions lie close to each other, which makes region division harder.

Experiment 1: Similarity measure and clustering

Using the trajectory similarity measure and the hierarchical clustering algorithm, we obtain the scene regions of S1 shown in Fig. 3. We compared our similarity measure with PCA + Euclidean, the Hausdorff distance, and the improved Hausdorff distance of [6]. The comparison results are shown in Table 3: the PCA method is fastest but the least accurate, while our method achieves the highest precision at only a slightly higher time cost than PCA.

Table 3: Clustering results for different similarity measures

Measure          Ours     PCA      Hausdorff   [6]
Precision (%)    100      71.4     86.9        98.5
Time (s)         93.70    60.53    100.363     180.09

Experiment 2: Recognizing trajectory and identifying abnormal event

Based on the scene region model, we build six trajectory analysis models for S1. The building process uses the HMM with the initial parameters $\lambda$ discussed in Section IV as input; the sequence length is 30 for S1.

In the recognition stage, flow vectors are accumulated until the whole trajectory is available. If the whole trajectory matches a unique trajectory analysis model, the vehicle's motion is flagged as one type of activity pattern, such as a car turning left or moving straight from east to west. We recognized 123 trajectories in S1 and compared our method with [6], which analyzes a trajectory point by point; its advanced version can handle real-time trajectories. The recognition rates are reported in Table 4: the HMM average rate reaches 88.6%, while the rate of [6] reaches 77.6%.

Table 4: Recognition rates for scene S1

Cluster                     C1     C2     C3     C4     C5     C6
Training samples            11     16     13     10     12     8
Test samples                18     19     20     26     22     21
HMM recognition rate (%)    100    89.4   90.0   84.6   86.4   81.0
[6] recognition rate (%)    94.4   89.4   85.0   61.5   54.5   81.0

Table 4 shows that the rate of [6] drops sharply for clusters C4 and C5. As Fig. 8(a) illustrates, at the start of C4 and C5 it is very difficult to judge whether a trajectory belongs to C4 or C5. The point-by-point recognition of [6] cannot resolve this situation, whereas our method analyzes the trajectory as a whole and therefore applies more widely.

If a trajectory does not match any model, we define it as an abnormal event. As shown in Fig. 7, the green car turned right, which is not allowed in S1.

Fig 7: abnormal event detection result

Experiment 3: Behavior understanding

For behavior understanding, we conducted an experiment on six trajectories with scene model S1. Each trajectory has four states, giving $k = 4$, $\bar{n} = 7$ and $L = 1$, and the relationship between the five actions and the movement string is computed with the help of Table 2. Using the four rules, we can describe the vehicle's movement in natural language. In addition, if two consecutive events fall in the same category, we merge the two actions into one for better readability. Table 6 shows the results for these six trajectories; the translation from behavior to natural language is self-explanatory.

Table 6 shows that the movement string clearly represents the information of each state. Through the inference process, the descriptions of vehicle behaviors match the judgment of human vision. However, because of the limited trajectory length, when the vehicle speed is low we get understanding errors in the last state. For example, the action in T3 suddenly jumps to exceed-speed (E); one reason is that not enough trajectory points were available for analysis in the last state.

VII. Conclusions

We have proposed a method for vehicle trajectory analysis and behavior understanding in intelligent traffic surveillance. Trajectory similarity is measured by considering position and velocity angle separately within PCA; the main advantage is that we overcome the loss of velocity information at a low time cost. A hierarchical clustering algorithm successfully divides a whole traffic scene into regions, and with HMMs we obtain analysis models of the scene regions. The method can be applied to activity recognition and abnormal event detection. Finally, a trajectory is represented as a movement string, and object behavior is inferred by the rules defined in this paper. Experimental results demonstrate that the method can be easily and widely applied in practical systems.

Table 6: Results of behavior understanding

Trajectory  Model  Movement string        Action transition   Meaning
T1          C1     {a^11 b^8 c^6 d^5}     {P → M → M → H}     First, accelerating; second, uniform moving; third, accelerating
T2          C2     {a^3 b^5 c^9 d^13}     {E → H → S → P}     First, decelerating; second, braking
T3          C3     {a^10 b^10 c^9 d^1}    {S → S → S → E}     First, uniform moving; second, accelerating
T4          C4     {a^7 b^10 c^8 d^5}     {M → S → M → H}     First, accelerating; second, decelerating; third, accelerating
T5          C5     {a^15 b^2 c^1 d^12}    {P → E → E → P}     First, accelerating; second, uniform moving; third, braking
T6          C6     {a^5 b^5 c^3 d^17}     {H → H → E → P}     First, uniform moving; second, accelerating; third, braking

REFERENCES

[1] Morris B T, Trivedi M M. A survey of vision-based trajectory learning and analysis for surveillance. IEEE Transactions on Circuits and Systems for Video Technology, 2008, 18(8): 1114-1127.

[2] Sheng H, Xiong Z, Weng J N, et al. An approach to detecting abnormal vehicle events in complex factors over highway surveillance video. Science in China Series E, 2008, 51: 199-208.

[3] Sheng H, Xiong Z, Weng J N, et al. Real-time detection of abnormal vehicle events with multi-feature over highway surveillance video. In: ITSC 2008, 2008: 550-556.

[4] Hu W M, Xie D, Tan T N. A hierarchical self-organizing approach for learning the patterns of motion trajectories. IEEE Transactions on Neural Networks, 2004, 15(1): 135-144.

[5] Khalid S, Naftel A. Classifying spatiotemporal object trajectories using unsupervised learning of basis function coefficients. In: Aggarwal J K, Cucchiara R, Chang E, et al., eds. The 3rd ACM International Workshop on Video Surveillance & Sensor Networks. NY: ACM, 2005: 45-51.

[6] Wang X G, Tieu K, Grimson E. Learning semantic scene models by trajectory analysis. In: Pinz A, ed. The European Conference on Computer Vision (ECCV). Berlin: Springer, 2006: 110-123.

[7] Zhang Z, Huang K, Tan T N. Comparison of similarity measures for trajectory clustering in outdoor surveillance scenes. In: ICPR'06. Hong Kong: IEEE Press, 2006, vol. 3: 1135-1138.

[8] Bashir F, Khokhar A, Schonfeld D. Segmented trajectory based indexing and retrieval of video data. In: International Conference on Image Processing (ICIP'03), 2003, vol. 2: 623-626.

[9] Piciarelli C, Micheloni C, Foresti G L. Trajectory-based anomalous event detection. IEEE Transactions on Circuits and Systems for Video Technology, 2008, 18(9).

[10] Bashir F, Khokhar A, Schonfeld D. Object trajectory-based activity classification and recognition using hidden Markov models. IEEE Transactions on Image Processing, 2007, 16(7): 1912-1919.

[11] Veeraraghavan H, Papanikolopoulos N. Combining multiple tracking modalities for vehicle tracking at traffic intersections. In: Tarn T J, ed. International Conference on Robotics and Automation (ICRA). New Orleans, LA, USA: IEEE, 2004, 3: 2303-2308.
