

A Novel Approach to Detect and Track Moving Object using Partitioning and Normalized Cross Correlation

Manoj S. Nagmode, Mrs. Madhuri A. Joshi, Ashok M. Sapkal College of Engineering, Pune University, India

[email protected], [email protected], [email protected]

Abstract

Research in motion analysis has evolved over the years into a challenging field with applications such as traffic monitoring, the military, medicine and the biological sciences. Detection and tracking of moving objects in video sequences can offer significant benefits to motion analysis. In this paper, an approach is proposed for the detection and tracking of a moving object in an image sequence. Two consecutive frames from the image sequence are partitioned into four quadrants, and the Normalized Cross Correlation (NCC) is applied to each sub-frame. The sub-frame with the minimum value of NCC indicates the presence of a moving object. The next step is to identify the location of the moving object, which is obtained by performing connected component analysis and morphological processing. A centroid calculation is then used to track the moving object. A number of experiments were performed using indoor and outdoor image sequences, and the results are compared with the Simple Difference (SD) and Background Subtraction (BS) methods. The proposed algorithm gives better performance in terms of Detection Rate (DR) and processing time per frame.

Keywords: Normalized Cross Correlation, moving object detection, component connectivity, centroid, tracking, processing time, Detection Rate, False Alarm Rate.

1. Introduction

The Normalized Cross Correlation (NCC) algorithm is based on finding the cross correlation between two consecutive frames in an image sequence. Correlation is used to measure the similarity between two frames. If the two consecutive frames are exactly the same, the value of the NCC is maximum, and no moving object is detected. If there is a moving object in the image sequence, the two consecutive frames are not exactly the same with respect to the positions of the pixel values, and the value of the NCC is less than the maximum. This property of the NCC is used for the detection of a moving object in an image sequence.

The remainder of the paper is organized as follows: Section 2 provides a literature survey. Section 3 highlights difficulties in moving object detection and tracking. Section 4 gives a brief theory of NCC. Section 5 provides a system overview. The algorithm is given in Section 6. Experimental results, together with a comparison against the Simple Difference and Background Subtraction methods, are presented in Section 7. Section 8 discusses the results and concludes the paper.

2. Literature Survey

Motion detection is one of the most important subjects in modern information acquisition systems for dynamic scenes. Robust object detection is a key technique for understanding the environment and a step towards the intelligent vehicle. A brief review of major research work in the field of moving object detection and tracking is given below.

Sang Hyun Kim [1] proposes a moving edge extraction method using the concepts of entropy and cross entropy, in which cross entropy is applied to dynamic scene analysis. The cross entropy concept enhances detection in dynamically changing areas. The method combines the results of cross entropy in the difference picture (DP) with those of entropy in the current frame, so that it can effectively extract moving edges. It also proposes a moving edge extraction method that combines the results of cross entropy with those of the Laplacian of Gaussian (LoG).

Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure and underwater sensing [2]. Reference [2] presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model and background modeling.

Davies et al. [3] addressed the problem of detecting and tracking small, low-contrast objects using wavelets together with a Kalman filter, but using both wavelet decomposition and Kalman filtering increases the processing time. The main drawback of this method is that, for longer wavelet filters, the target becomes smeared over a larger region of the images, so the locality of the target is lost. It is therefore essential to develop a time-efficient algorithm. Li-Qun Xu [4] addresses primarily the issue of robust detection

ICGST-GVIP Journal, ISSN: 1687-398X, Volume 9, Issue 4, August 2009


and tracking of multiple objects. First, morphological reconstruction is used to remove cast shadows and highlights; then a temporal-template-based robust tracking scheme is introduced. In [5], an algorithm is presented for detecting objects in a sequence of color images taken from a moving camera. The first step of the algorithm is the estimation of motion in the image plane. Instead of calculating optical flow, or tracking single points, edges or regions over a sequence of images, it determines the motion of clusters built by grouping pixels in a color/position feature space. The second step is a motion-based segmentation, where adjacent clusters with similar trajectories are combined to build object hypotheses. The application area is vision-based driving assistance.

Object tracking in video sequences is an important topic with applications in video compression, robotics and elsewhere. The algorithm proposed by Yiwei Wang et al. [6] uses wavelet decomposition and multiresolution analysis to detect the moving object, and a dispersion calculation to track it; however, it takes more processing time. The detection of oncoming vehicles in traffic scenes using depth information is studied in [7]. The image sequences are captured by a pair of stereo cameras mounted in a test vehicle. The main difficulty is building a system that runs in real time on a standard PC and detects vehicles accurately even under unfavorable illumination and weather conditions.

Measures to evaluate quantitatively the performance of video object segmentation and tracking methods without ground-truth (GT) segmentation maps are presented in [8]. The proposed measures are based on spatial differences of color and motion along the boundary of the estimated video object plane, and temporal differences between the color histogram of the current object plane and its predecessors. They can be used to localize (spatially and/or temporally) regions where segmentation results are good or bad, and they can be combined to yield a single numerical measure of the goodness of the boundary segmentation and tracking results over a sequence.

In [9], a real-time video tracking and recognition system (VTAR) for video surveillance applications is presented. The system takes a video sequence as the input of a background learning module and outputs a statistical background model that describes the static parts of the scene. A time-adaptive, per-pixel min-max method is developed to compute the likelihood of the RGB values of a background pixel. Next, the learned background model is used to extract foreground pixels, which are grouped into individual foreground objects. Five features of these objects are then computed as similarity measurements, which are used for classifying and tracking objects. Notably, these five features can be computed during the grouping process without additional iterations, which speeds up the whole processing time. Experiments performed on a surveillance system in outdoor environments show the effectiveness of the proposed approach.

An algorithm based on color characteristics and a Kalman filter for motion detection and tracking is presented in [10]. It uses the HSV color space; the hue value in HSV is essential to identify the object, while the other components cannot be used to identify it since they are not specific to the tracked object. Elena Stringa et al. [11] suggest a video-based surveillance system for the automatic detection of abandoned objects in indoor environments. This surveillance system integrates an advanced real-time detection method with video indexing capabilities in order to establish a logical correlation between a suspicious object and the person who left it in a given environment, by allowing the human operator to easily retrieve the image or clip of interest while scanning a large video library.

R. Cucchiara et al. [12] describe an approach for Moving Visual Object segmentation in an unstructured traffic environment, considering a complex situation with moving people, vehicles and infrastructure that have different appearance and motion models. A specific approach based on background subtraction with a statistical, knowledge-based background update is given; it allows a limited number of frames for the background update, suitable for real-time computation. Foresti et al. [13] give the simple difference method, the Derivative Method (DM) and the Shading Method (SM), all based on the difference between two consecutive frames. Reference [14] presents a two-step method to speed up object detection systems in computer vision that use Support Vector Machines (SVMs) as classifiers.

Mathias Pingault et al. [15] describe a method for estimating optical flow by generalizing the brightness constancy assumption to additive transparencies. The brightness constancy assumption is obtained by assuming constant velocity fields over three images of a sequence. To suppress the unavoidable aperture problem, a global model based on B-spline basis functions is applied to constrain the optical flows. This description of motion allows a coarse-to-fine estimation on artificial image sequences. B. Heisele et al. [16] present an algorithm for tracking moving objects in a sequence of color images, in which object parts are determined by a divisive clustering algorithm applied to all pixels in the first image of the sequence. For each new image, the clusters of the previous frame are adapted iteratively by a parallel k-means clustering algorithm. This algorithm is complex and requires more computational time, so there is a need to increase its speed.

Reference [19] uses image moments to define a distance function between two circular regions; with this method, rotation and updating of the old template in each frame are not required. A real-time hand tracking and gesture recognition system is presented in [20]. It consists of three modules: real-time hand tracking, gesture training and gesture recognition using pseudo two-dimensional hidden Markov models. It uses a Kalman filter and hand-blob analysis for hand tracking to obtain motion descriptors and the hand region; it is fairly robust to background clutter and uses skin color for hand gesture tracking and recognition. An efficient algorithm for motion vector analysis, based on optical flow and used to segment moving objects and obstacles, is presented in [21]. Circle detection and


tracking speed-up based on change-driven image processing is presented in [22]. This approach processes only the pixels of the image sequence that are significant for the motion estimation algorithm, reducing the amount of data to be processed and increasing the algorithm's speed.

From the literature survey, there is a need to develop a time-efficient algorithm that can also work under noise and illumination variation. In this paper, a Partitioning and Normalized Cross Correlation (PNCC) based algorithm is proposed for the detection of moving objects. This algorithm takes less processing time, which increases the speed, and its detection rate is better than that of the simple difference (SD) and background subtraction (BS) methods. The proposed work involves a technique for moving object detection and tracking designed to optimize time performance.

3. Difficulties in Moving Object Detection and Tracking

Tracking a detected moving object in an image sequence is a significant and difficult task. It is a crucial part of a smart surveillance system, since without tracking the system could not extract cohesive temporal information about objects, and higher-level behavior analysis would not be possible. On the other hand, occlusions and reflections make tracking a difficult research problem. Most tracking systems fail under some situations, whether because of illumination changes, pose variations or occlusions; therefore the need for automatic performance evaluation emerges in these applications. Short- and long-term dynamic scene changes such as repetitive motion (e.g., waving tree leaves), light reflectance, shadows, camera noise and sudden illumination variations make reliable and fast moving object detection difficult. Hence, it is important to pay the necessary attention to the object detection step in order to have a reliable, robust and fast visual surveillance system.

4. Theory of NCC (Normalized Cross Correlation)

Correlation is mainly used for measuring the similarity between two images, and is useful in feature recognition and registration. The normalized cross correlation is given by equation (1).

r = [Σm Σn (Amn − Ā)(Bmn − B̄)] / sqrt{[Σm Σn (Amn − Ā)²] [Σm Σn (Bmn − B̄)²]}   (1)

Here, Amn and Bmn are the pixel values of images A and B at position (m, n), and Ā and B̄ denote the average pixel values of images A and B, respectively. The coefficient r is normalized with respect to both images and always lies in the range [-1, 1].

5. System Overview

The basic steps involved in the process are given in Figure 1. As shown, the input image sequence is taken from a static camera. Two consecutive frames from the image sequence are partitioned into four quadrants. Moving object detection then takes place by finding the Normalized Cross Correlation between the two partitioned frames. Moving object detection in video involves verifying the presence of an object in the image sequence and possibly locating it precisely for recognition. After the moving object is detected, its location is obtained by performing connected component analysis. Tracking of the detected moving object is carried out by calculating the centroids of the detected object. Tracking means the detection of a target over time, thus establishing its trajectory; the aim of object tracking is to establish a correspondence between objects or object parts in consecutive frames and to extract temporal information about objects such as trajectory, posture, speed and direction.
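As an illustration of the detection step described above, the sketch below partitions two consecutive frames into four quadrants and flags the quadrant whose NCC (equation (1)) is smallest, with the threshold taken as the average of the four NCC values. This is a minimal Python/NumPy sketch of the method as described, not the authors' MATLAB implementation; the frame sizes and pixel values are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of equation (1); r lies in [-1, 1]."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # Flat (zero-variance) sub-frames carry no motion evidence; treat as identical.
    return 1.0 if denom == 0.0 else float((a * b).sum() / denom)

def quadrants(frame):
    """Partition a frame into four sub-frames x1..x4 (or y1..y4)."""
    h, w = frame.shape[0] // 2, frame.shape[1] // 2
    return [frame[:h, :w], frame[:h, w:], frame[h:, :w], frame[h:, w:]]

def detect_quadrant(prev_frame, curr_frame):
    """Return the index of the quadrant with minimum NCC and the values c1..c4."""
    scores = [ncc(y, x) for y, x in zip(quadrants(prev_frame), quadrants(curr_frame))]
    return int(np.argmin(scores)), scores

# Hypothetical 8x8 frames: a gradient background plus one bright moving pixel.
prev_frame = np.add.outer(np.arange(8.0), np.arange(8.0))
curr_frame = prev_frame.copy()
curr_frame[5, 1] += 255.0          # object appears in the third quadrant (x3)

idx, scores = detect_quadrant(prev_frame, curr_frame)
threshold = sum(scores) / 4.0      # threshold = average of c1..c4
print(idx)                         # 2 (zero-based index of quadrant x3)
print(scores[idx] < threshold)     # True: minimum NCC falls below the threshold
```

Identical quadrants give r = 1 exactly, while the quadrant containing the changed pixels gives a strictly smaller value, so the minimum reliably singles out the quadrant with motion.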

Figure 1. Basic steps: partitioning of two consecutive frames; moving object detection using cross correlation; identification of the moving object's location and tracking.

6. Algorithm

The basic steps for the detection and tracking of moving objects are given below.

• Read two consecutive frames from the image sequence, called the current frame and the previous frame.
• Divide each frame into four quadrants. For example, the current frame is divided into four parts called x1, x2, x3 and x4; similarly, the previous frame is divided into four parts called y1, y2, y3 and y4.
• Find the NCC of each sub-image of the current frame with the corresponding sub-image of the previous frame. This gives four NCC values, called c1, c2, c3 and c4.
• Find the minimum of these four NCC values.
• Apply a threshold to this minimum value. The threshold is selected as the average of the four NCC values (i.e., c1, c2, c3 and c4).
• Suppose the minimum NCC value is obtained in the first quadrant; this indicates that a moving object is present in that quadrant.
• Operate on the first quadrant: take the difference between the first quadrants of the two consecutive frames.


• Then find the location of the moving object by performing connected component analysis and morphological processing.
• Calculate the centroid to track the moving object.
• Next, obtain the second minimum value among c1, c2, c3 and c4. This checks whether another moving object is present in a different part of the image.
• If this second minimum value is also below the threshold, a moving object is present in that quadrant as well; identify the location of the second moving object and track it.
• Repeat the same procedure for the next frame.

7. Experimental Results

7.1 Hardware and Software Platform
All the algorithms were implemented and tested on the Windows XP platform using MATLAB 7, on a computer with an Intel Pentium Dual Core 1600 MHz CPU and 384 MB of RAM. Image sequences were acquired with a Logitech camera at 352 x 288 pixels, with a frame capture rate of 5 frames/second. The algorithm has been tested in a variety of indoor and outdoor environments. Some image sequences are standard test sequences, while others were captured with a web camera. To ensure a good variety of data, the images were taken at different times, on different days and on different roads.

7.2 Performance Evaluation
Performance evaluation for moving object detection and tracking is given in terms of:

• Qualitative analysis
• Quantitative analysis
• Time performance analysis
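Before turning to the results, the localization and tracking steps of the algorithm in Section 6 (connected component analysis followed by centroid calculation) can be sketched as follows. This is a minimal pure-Python/NumPy illustration on a hypothetical binary foreground mask, not the paper's MATLAB code; 8-connectivity is assumed here as one plausible choice, and taking the largest component is a simplification for a single object.

```python
import numpy as np
from collections import deque

def largest_component_centroid(mask):
    """Label 8-connected foreground regions in a binary mask and return the
    centroid (row, col) of the largest one, or None if the mask is empty."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    next_label = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                labels[r, c] = next_label
                size = 0
                q = deque([(r, c)])
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    size += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                    and mask[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                q.append((ny, nx))
                sizes[next_label] = size
    if not sizes:
        return None
    best = max(sizes, key=sizes.get)          # largest region = the moving object
    ys, xs = np.nonzero(labels == best)
    return float(ys.mean()), float(xs.mean())

mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 1:3] = True                         # a 2x2 moving-object blob
mask[5, 5] = True                             # a single noise pixel
print(largest_component_centroid(mask))       # (1.5, 1.5)
```

Tracking then reduces to recording this centroid frame by frame, establishing the object's trajectory.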

7.2.1 Qualitative Analysis
The most reliable approach for qualitative (visual) evaluation is to display a flicker animation: a short movie file containing a registered pair of images (I1(x), I2(x)) played in rapid succession at intervals of about a second each. Another qualitative evaluation is to display the sequence and point out the tracking of the detected moving object. Moving object detection and tracking results are shown in Figures 2 to 9, where tracking results are indicated by a pointer. In the sequence shown in Figure 2, a person is moving from left to right and is pointed out by a star; different pointers are used for different image sequences. Figure 3 shows the tracking sequence of a car. Figures 4, 5 and 6 show the tracking sequence of multiple objects (fish). In this image sequence the shape of each fish undergoes sudden deformation (non-rigid movement), and repetitive motion (water) and illumination variation are also present, so the motion is difficult to track. As observed in Figures 4 and 5, the SD and BS methods fail to detect and track the fish sequence, whereas Figure 6 shows that the PNCC method provides better tracking results.


Figure 2. The tracking sequence of a walking person. This walking person is pointed by a red star.


Figure 3. Tracking sequence of a car. The tracking of a car is pointed by a pink arrow.


Figure 4. Tracking sequence of multiple objects by simple difference method


Figure 5. Tracking sequence of multiple objects by background subtraction method.


Figure 6. Tracking sequence of multiple objects by PNCC method


Figure 7. Tracking sequence of a walking person.

Figure 8. Tracking sequence of two moving objects.


Figure 9. Tracking sequence of two moving objects in low contrast image sequence.


Figure 7 shows the image sequence of a walking person, pointed out by a square. As shown in Figure 8, part of the image sequence is very bright and the rest is in shadow; there are two moving objects, pointed out by squares. Figure 9 shows an image sequence with two moving objects in a low-contrast, low-brightness scene; these objects are pointed out by stars.

7.2.2 Quantitative Analysis
This is the process of establishing the "correct answer" for what exactly the algorithm is expected to produce [17], [18]. Two metrics characterize the system: the Detection Rate (DR) and the False Alarm Rate (FAR). These rates, used to quantify the output of the system, are based on the observations that a moving object can be detected (positive) or not detected (negative), and that a detection decision can be either correct (true) or incorrect (false). A detection result therefore falls into one of the following categories: TP (true positive), detected regions that correspond to moving objects; FP (false positive), detected regions that do not correspond to a moving object (also known as false alarms); and FN (false negative), moving objects that are not detected (also known as misses). These scalars are combined to define the following metrics:

DR = TP / (TP + FN) (2)
FAR = FP / (TP + FP) (3)

TP, FP and FN values for different image sequences are shown in Table 1. There are three values of TP and FN for the S2 image sequence, meaning that it contains three moving objects; similarly, the S4 and S11 sequences each contain two moving objects.

Image Sequence   TP           FP   FN
S1               115          4    5
S2               60, 77, 18   7    31, 5, 23
S3               18           2    1
S4               21, 17       1    14, 3
S5               51           4    2
S6               7            2    1
S7               6            2    0
S8               16           2    4
S9               15           3    9
S10              33           2    1
S11              22, 15       2    6, 6

Table 1. TP, FP and FN of different image sequences.

From the obtained values of TP, FP and FN, the detection rate and false alarm rate are computed, as shown in Table 2.

Image Sequence   Detection Rate (%)   False Alarm Rate (%)
S1               95                   3
S2               65, 93, 43           10, 8, 28
S3               94                   10
S4               60, 85               5, 5
S5               96                   7
S6               88                   22
S7               100                  25
S8               80                   11
S9               62                   16
S10              97                   5
S11              78, 74               8, 11

Table 2. DR and FAR for different image sequences.
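The rates in Table 2 follow from the counts in Table 1 via equations (2) and (3). As a check, sequence S3 (TP = 18, FP = 2, FN = 1) reproduces the tabulated values when the percentages are truncated to integers; this is a sketch, since the paper does not state its rounding convention.

```python
def detection_rate(tp, fn):
    """Equation (2): DR = TP / (TP + FN), as a percentage."""
    return 100.0 * tp / (tp + fn)

def false_alarm_rate(tp, fp):
    """Equation (3): FAR = FP / (TP + FP), as a percentage."""
    return 100.0 * fp / (tp + fp)

# Sequence S3 from Table 1: TP = 18, FP = 2, FN = 1
print(int(detection_rate(18, 1)))       # 94, matching Table 2
print(int(false_alarm_rate(18, 2)))     # 10, matching Table 2
```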

7.2.3 Time Performance Analysis
Temporal performance is evaluated by estimating how many CPU seconds the system takes to process an image of a sequence, i.e., the average processing time per frame, as shown in Table 3.

7.3 Comparison with the Simple Difference (SD) Method
This is the simplest method for moving object detection and tracking. It attempts to detect moving regions using the pixel-by-pixel difference of consecutive frames in an image sequence. The two-frame temporal differencing scheme marks a pixel as foreground if it satisfies the following equation:

| It(x,y) - It-1(x,y) | > T (4)

where It(x,y) and It-1(x,y) represent the graylevel intensity values at pixel position (x,y) in the current and previous frames, respectively, and T is a threshold selected experimentally. Pixels that satisfy equation (4) are marked as foreground. For comparison, experimental results for moving object detection and tracking on one image sequence are shown in Figures 4 and 6, using the simple difference and PNCC methods respectively. The simple difference method is sensitive to noise and illumination variation, so the moving object is not detected properly; the SD method is unable to detect the relevant pixels of the moving object. Under noise and illumination variation, the SD method produces a large number of false foreground pixels and therefore false detections, which causes it to fail under these conditions. In poor lighting, the graylevel differences between pixels are smaller, so fewer foreground pixels are obtained and performance suffers.

7.4 Comparison with the Background Subtraction (BS) Method
The background subtraction method detects moving objects by subtracting a background frame from the current frame, pixel by pixel. Pixels where the difference is above a threshold are classified as foreground: a pixel at location (x,y) in the current image It is marked as foreground if equation (5) is satisfied.

| It(x,y) - Bt(x,y) | > T (5)

where It and Bt are the current and background frames, respectively, and T is a threshold. As observed in Figure 5, there is considerable misdetection because of illumination variation and the repetitive motion of the water. This method takes a long time to estimate the background model, and it fails for frames in which the illumination changes: many static objects are detected as moving objects due to illumination change.
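Equations (4) and (5) differ only in the reference frame (previous frame versus background model). Both thresholding rules can be sketched as below; the 4x4 frames and the threshold value are hypothetical, and casting to a signed type avoids unsigned wraparound in the absolute difference.

```python
import numpy as np

def simple_difference_mask(curr, prev, t):
    """Equation (4): foreground where |It - It-1| > T."""
    return np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > t

def background_subtraction_mask(curr, background, t):
    """Equation (5): foreground where |It - Bt| > T."""
    return np.abs(curr.astype(np.int32) - background.astype(np.int32)) > t

background = np.full((4, 4), 50, dtype=np.uint8)   # static scene model
prev = background.copy()
curr = background.copy()
curr[1, 2] = 200                                   # a single moving pixel

print(simple_difference_mask(curr, prev, t=30).sum())            # 1
print(background_subtraction_mask(curr, background, t=30).sum()) # 1
```

The sensitivity both methods show to illumination change is visible here: brightening every pixel of `curr` by more than T would mark the entire frame as foreground.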


A sudden illumination change between two consecutive frames causes pixel values to differ by a large amount, so the complete frame would be regarded as foreground. With the PNCC method, however, these static objects are detected as background and the moving objects are well detected; the PNCC method gives much better results. Table 3 gives the comparison of the SD, BS and PNCC methods. As observed from Table 3, the FAR is high for the SD and BS methods.

Algorithm   Sequence   DR (%)   FAR (%)   DR/(DR+FAR) (%)   Avg Time (ms)
SD          S1         95       42        69                250
SD          S2         91       55        62                538
SD          S3         78       65        54                380
SD          S4         87       56        60                254
SD          S5         89       49        64                257
SD          S6         87       50        63                328
SD          S7         83       50        62                371
SD          S8         95       50        65                619
SD          S9         91       51        64                988
SD          S10        97       50        65                242
SD          S11        96       57        62                286
BS          S1         89       14        86                472
BS          S2         88       38        69                391
BS          S3         47       60        43                368
BS          S4         89       16        84                401
BS          S5         84       4         95                231
BS          S6         88       46        65                336
BS          S7         100      20        83                400
BS          S8         78       25        75                277
BS          S9         76       13        85                225
BS          S10        96       11        89                282
BS          S11        84       4         95                240
PNCC        S1         95       3         96                459
PNCC        S2         67       15        87                435
PNCC        S3         94       10        90                375
PNCC        S4         72       5         93                333
PNCC        S5         96       7         93                314
PNCC        S6         88       22        80                409
PNCC        S7         100      25        80                450
PNCC        S8         80       11        87                345
PNCC        S9         62       16        79                296
PNCC        S10        97       5         95                332
PNCC        S11        76       9         89                353

Table 3. Comparison of algorithms based on DR, FAR and average processing time per frame.

8. Discussions and Conclusions

An algorithm based on Partitioning and Normalized Cross Correlation is proposed for the detection and tracking of a moving object in an image sequence. An important advantage of this algorithm is that it requires very little preprocessing of the frames (median filtering and contrast stretching). The algorithm is robust against changes in illumination and lighting conditions, and it also gives good results in poor lighting. A comparison of the simple difference method, the background subtraction method and the PNCC method is given in Table 3, from which it is observed that the Detection Rate of the PNCC method is better than those of the SD and BS methods, and its False Alarm Rate is lower.

Graph 1 plots DR/(DR+FAR) against the image sequences, showing the comparative results of the SD, BS and PNCC methods. The average processing time of the PNCC method is comparable to that of the SD and BS methods. The algorithm performs better because each frame is partitioned into sub-images before the cross correlation is computed and the location of the moving object is identified. As a result, the average processing time per frame of the PNCC method is reduced compared with the other methods.

Graph 1. DR/(DR+FAR) in % (y-axis, 40-100) versus image sequences 1-11 (x-axis) for the SD, BS and PNCC methods.
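As the conclusions note, correlation and convolution differ only by a 180 degree rotation of the kernel, so hardware or software built for one computes the other. A small illustrative sketch of this equivalence (not from the paper; a direct valid-mode implementation in NumPy):

```python
import numpy as np

def correlate2d(img, ker):
    """Valid-mode 2D cross correlation, implemented directly."""
    kh, kw = ker.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Slide the kernel over the image and sum elementwise products.
            out[i, j] = (img[i:i + kh, j:j + kw] * ker).sum()
    return out

def convolve2d(img, ker):
    """Valid-mode 2D convolution, obtained from correlation by
    rotating the kernel 180 degrees (flip both axes)."""
    return correlate2d(img, ker[::-1, ::-1])
```

Because `convolve2d` is just `correlate2d` with a flipped kernel, a single correlation engine serves both operations.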

Even with variations in illumination and poor lighting conditions, the dissimilarity at the locations of the moving object generates a large number of foreground pixels, so the pixels corresponding to the moving object are properly detected by the PNCC method. The PNCC algorithm therefore gives good results under illumination changes and poor lighting. As observed from Figure 6, the algorithm can also track non-rigid movements, which shows that it is robust with respect to shape variations, illumination variations and noisy signals. Specialized hardware exists to perform convolution and correlation in real time; since the two operations differ only by a 180 degree rotation of the kernel, correlation can be computed with the same hardware or software used for convolution. Advances in the development of these algorithms would lead to breakthroughs in applications that use visual surveillance.

References

[1] Sang Hyun Kim, "A Novel Approach to Moving Edge Detection Using Cross Entropy", GVIP'05 Conference, pp. 21-24, 19-21 December 2005, CICC, Cairo, Egypt.

[2] Richard J. Radke, Srinivas Andra, Omar Al-Kofahi, Badrinath Roysam, "Image Change Detection Algorithms: A Systematic Survey", IEEE Transactions on Image Processing, Vol. 14, No. 3, pp. 294-307, March 2005.

ICGST-GVIP Journal, ISSN: 1687-398X, Volume 9, Issue 4, August 2009


[3] Davies, Palmer and Mirmehdi, "Detection and Tracking of Very Small Low Contrast Objects", Proceedings of the 9th British Machine Vision Conference, pp. 1, 1998.

[4] Li Qun Xu, “Robust Detection and Tracking of Multiple Objects in Cluttered Scenes”, One Day BMVA symposium at the Royal Statistical Society, 12 Errol Street, London, UK, 24th March 2004.

[5] Bernd Heisele, "Motion-Based Object Detection and Tracking in Color Image Sequences", Fourth Asian Conference on Computer Vision, pp. 1028-1033, Taipei, Taiwan, 2000.

[6] Yiwei Wang, Robert E. Van Dyck and John F. Doherty, "Tracking Moving Objects in Video Sequences", Conference on Information Sciences and Systems, Vol. 2, pp. 24-29, March 2000.

[7] Pascal Paysan, "Stereo Vision Based Vehicle Classification Using Support Vector Machines", thesis submitted to the University of Applied Sciences, Fachhochschule Esslingen, February 28, 2004.

[8] Cigdem Eroglu Erdem, Bülent Sankur and A. Murat Tekalp, "Performance Measures for Video Object Segmentation and Tracking", IEEE Transactions on Image Processing, Vol. 13, No. 7, pp. 937-951, July 2004.

[9] Gary Tsai, Adrian Chiang, Truman Yang, Chih-Chun Lai, Shyue-Wu Wang, Chen-Duo Liu, “Video Tracking and Recognition System”, Multimedia Lab, China.

[10] Mikhael Abou Nehme, Walid Khoury, Braheem Yameen, Mohamad Adnan Al-Alaoui, “Real Time Color Based Motion Detection and Tracking”, Proceedings of the 3rd IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2003), pp. 696-700, 14-17 December 2003.

[11] Elena Stringa and Carlo S. Regazzoni, "Real-Time Video-Shot Detection for Scene Surveillance Applications", IEEE Transactions on Image Processing, Vol. 9, No. 1, pp. 69-79, January 2000.

[12] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, "Statistic and Knowledge-Based Moving Object Detection in Traffic Scenes", Proceedings of Intelligent Transportation Systems 2000, pp. 27-32, 2000.

[13] Gian Luca Foresti, Christian Micheloni, Lauro Snidaro, Paolo Remagnino and Tim Ellis, "Active Video-Based Surveillance System", IEEE Signal Processing Magazine, Vol. 22, No. 2, pp. 25-37, March 2005.

[14] Bernd Heisele, Thomas Serre, Sayan Mukherjee, Tomaso Poggio, "Feature Reduction and Hierarchy of Classifiers for Fast Object Detection in Video Images", Center for Biological and Computational Learning, M.I.T., Cambridge, MA, USA.

[15] Mathias Pingault, Eric Bruno and Denis Pellerin, "A Robust Multiscale B-Spline Function Decomposition for Estimating Motion Transparency", IEEE Transactions on Image Processing, Vol. 12, No. 11, pp. 1416-1426, November 2003.

[16] B. Heisele, U. Kressel and W. Ritter, "Tracking Non-Rigid, Moving Objects Based on Color Cluster Flow", 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97), pp. 257, 1997.

[17] Tae-Kyun Kim, Sung-Uk Lee, Jong-Ha Lee, Seok-Cheol Kee and Sang-Ryong Kim, "Integrated Approach of Multiple Face Detection for Video Surveillance", The 16th International Conference on Pattern Recognition (ICPR'02), Vol. 2, pp. 394-397, August 2002.

[18] Isaac Cohen, Gérard Medioni, "Detecting and Tracking Moving Objects for Video Surveillance", IEEE Proc. Computer Vision and Pattern Recognition, June 23-25, 1999, Fort Collins, CO.

[19] Payman Haqiqat, “Using image Moments for Tracking Rotating Objects”, ARAS’05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.

[20] Nguyen Dang Binh, Enokida Shuichi, Toshiaki Ejima, "Real Time Hand Tracking and Gesture Recognition System", GVIP'05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.

[21] P. Foggia, A. Limongiello, M. Vento, "A Moving Object and Obstacle Detection System in Real Time AGV and AMR Applications", ARAS'05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.

[22] Fernando Pardo, Jose A. Boluda, Julio C. Sosa, "Circle Detection and Tracking Speed Up Based on Change Driven Image Processing", GVIP'05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.


MANOJ SHRIKRISHNA NAGMODE received the Master of Engineering degree in 1999 from Government College of Engineering, Pune, Maharashtra, India, and is pursuing his Ph.D. in Image Processing. Currently he is working as Assistant Professor in the Department of Electronics and Telecommunication Engineering, College of Engineering & Technology, Pune, Maharashtra, India. He has published a total of 11 papers: 1 in an international journal, 4 in international conferences and 6 in national conferences. His areas of interest include signal processing, image processing and VLSI design.

ASHOK SAPKAL received the Ph.D. in E&TC from the University of Pune and the Master of Engineering in E&TC from the University of Pune in 1992. He has published a total of 23 papers: 14 in international conferences and 9 in national conferences. He is currently working as Professor in the E&TC Department, College of Engineering, Pune. His areas of interest include power electronics and signal processing.

MADHURI JOSHI received the Ph.D. in E&TC from the University of Pune. She is currently working as Professor in the E&TC Department, College of Engineering, Pune, Maharashtra, India. She has published a total of 60 papers: 7 in national journals, 38 in international conferences and 15 in national conferences. Her areas of interest include signal processing and image processing.

Contact: Manoj Shrikrishna Nagmode. E-mail: [email protected], [email protected], [email protected]. Postal address: 34, Jawahar Nagar, Ganesh Khind Road, Pune 411016, Maharashtra, India. Telephone: 91-020-9226771488.
