
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 23, NO. 2, FEBRUARY 2013 311

Automatic License Plate Recognition (ALPR): A State-of-the-Art Review

Shan Du, Member, IEEE, Mahmoud Ibrahim, Mohamed Shehata, Senior Member, IEEE, and Wael Badawy, Senior Member, IEEE

Abstract—Automatic license plate recognition (ALPR) is the extraction of vehicle license plate information from an image or a sequence of images. The extracted information can be used with or without a database in many applications, such as electronic payment systems (toll payment, parking fee payment) and freeway and arterial monitoring systems for traffic surveillance. ALPR uses either a color, black and white, or infrared camera to take images. The quality of the acquired images is a major factor in the success of ALPR. As a real-life application, ALPR has to quickly and successfully process license plates under different environmental conditions, such as indoors, outdoors, and day or night time. It should also be generalized to process license plates from different nations, provinces, or states. These plates usually contain different colors, are written in different languages, and use different fonts; some plates have a single-color background and others have background images. License plates can be partially occluded by dirt, lighting, and towing accessories on the car. In this paper, we present a comprehensive review of the state-of-the-art techniques for ALPR. We categorize different ALPR techniques according to the features they use in each stage, and compare them in terms of pros, cons, recognition accuracy, and processing speed. Forecasts of the future of ALPR are given at the end.

Index Terms—Automatic license plate recognition (ALPR), automatic number plate recognition (ANPR), car plate recognition (CPR), optical character recognition (OCR) for cars.

I. Introduction

Manuscript received May 21, 2011; revised February 21, 2012; accepted April 6, 2012. Date of publication June 8, 2012; date of current version February 1, 2013. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada and Alberta Innovates Technology Futures. This paper was recommended by Associate Editor Q. Tian.

S. Du and M. Ibrahim are with IntelliView Technologies, Inc., Calgary, AB T2E 2N4, Canada (e-mail: [email protected]; [email protected]).

M. Shehata is with the Department of Electrical and Computer Engineering, Faculty of Engineering, Benha University, Cairo 11241, Egypt (e-mail: [email protected]).

W. Badawy is with the Department of Computer Engineering, College of Computer and Information System, Umm Al-Qura University, Makkah 21955, Saudi Arabia (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TCSVT.2012.2203741

Fig. 1. (a) Standard Alberta license plate. (b) Vanity Alberta license plate.

AUTOMATIC license plate recognition (ALPR) plays an important role in numerous real-life applications, such as automatic toll collection, traffic law enforcement, parking lot access control, and road traffic monitoring [1]–[4]. ALPR recognizes a vehicle's license plate number from an image or images taken by either a color, black and white, or infrared camera. It is accomplished by the combination of many techniques, such as object detection, image processing, and pattern recognition. ALPR is also known as automatic vehicle identification, car plate recognition, automatic number plate recognition, and optical character recognition (OCR) for cars. Variations in plate types or environments cause challenges in the detection and recognition of license plates. They are summarized as follows.

1) Plate variations:

a) location: plates exist in different locations of an image;
b) quantity: an image may contain no or many plates;
c) size: plates may have different sizes due to the camera distance and the zoom factor;
d) color: plates may have various character and background colors due to different plate types or capturing devices;
e) font: plates of different nations may be written in different fonts and languages;
f) standard versus vanity: for example, the standard license plate in Alberta, Canada, has three and recently (in 2010) four letters to the left and three numbers to the right, as shown in Fig. 1(a). Vanity (or customized) license plates may have any number of characters without any regulations, as shown in Fig. 1(b);
g) occlusion: plates may be obscured by dirt;
h) inclination: plates may be tilted;
i) other: in addition to characters, a plate may contain frames and screws.

2) Environment variations:

a) illumination: input images may have different types of illumination, mainly due to environmental lighting and vehicle headlights;
b) background: the image background may contain patterns similar to plates, such as numbers stamped on a vehicle, bumpers with vertical patterns, and textured floors.

1051-8215/$31.00 © 2012 IEEE


Fig. 2. Four stages of an ALPR system.

The ALPR system that extracts a license plate number from a given image can be composed of four stages [5]. The first stage is to acquire the car image using a camera. The parameters of the camera, such as the type of camera, camera resolution, shutter speed, orientation, and light, have to be considered. The second stage is to extract the license plate from the image based on some features, such as the boundary, the color, or the existence of the characters. The third stage is to segment the license plate and extract the characters by projecting their color information, labeling them, or matching their positions with templates. The final stage is to recognize the extracted characters by template matching or using classifiers, such as neural networks and fuzzy classifiers. Fig. 2 shows the structure of the ALPR process. The performance of an ALPR system relies on the robustness of each individual stage.
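The four-stage flow above can be sketched as a pipeline of stubbed functions. All names and signatures here are illustrative placeholders (not from any cited system); each stub notes the kinds of techniques the survey discusses for that stage.

```python
# Hypothetical skeleton of the four-stage ALPR pipeline; the stubs stand in
# for the concrete techniques surveyed in Sections II-IV.

def acquire_image(source):
    """Stage 1: grab a frame (stubbed; in practice, camera capture with a
    chosen resolution, shutter speed, and orientation)."""
    return source

def extract_plate(image):
    """Stage 2: return the sub-image most likely to contain the plate
    (e.g., edge-, color-, texture-, or character-based localization)."""
    return image

def segment_characters(plate):
    """Stage 3: split the plate region into per-character images
    (e.g., projection profiles or connected-component labeling)."""
    return [plate]

def recognize_characters(chars):
    """Stage 4: map each character image to a symbol
    (e.g., template matching or a trained classifier)."""
    return "?" * len(chars)

def alpr(source):
    image = acquire_image(source)
    plate = extract_plate(image)
    chars = segment_characters(plate)
    return recognize_characters(chars)
```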

The purpose of this paper is to provide researchers with a systematic survey of existing ALPR research by categorizing existing methods according to the features they used, analyzing the pros and cons of these features, comparing them in terms of recognition performance and processing speed, and raising some open issues for future research.

The remainder of this paper is organized as follows. In Section II, license plate extraction methods are classified with a detailed review. Section III demonstrates character segmentation methods and Section IV discusses character recognition methods. At the beginning of each section, we define the problem and its levels of difficulty, and then classify the existing algorithms with our discussion. In Section V, we summarize this paper and discuss areas for future research.

II. License Plate Extraction

The license plate extraction stage influences the accuracy of an ALPR system. The input to this stage is a car image, and the output is a portion of the image containing the potential license plate. The license plate can exist anywhere in the image. Instead of processing every pixel in the image, which increases the processing time, the license plate can be distinguished by its features, and therefore the system processes only the pixels that have these features. The features are derived from the license plate format and the characters constituting it. License plate color is one of the features, since some jurisdictions (i.e., countries, states, or provinces) have certain colors for their license plates. The rectangular shape of the license plate boundary is another feature that is used to extract the license plate. The color change between the characters and the license plate background, known as the texture, is used to extract the license plate region from the image. The existence of the characters can be used as a feature to identify the region of the license plate. Two or more features can be combined to identify the license plate.

In the following, we categorize the existing license plate extraction methods based on the features they used.

A. License Plate Extraction Using Boundary/Edge Information

Since the license plate normally has a rectangular shape with a known aspect ratio, it can be extracted by finding all possible rectangles in the image. Edge detection methods are commonly used to find these rectangles [8]–[11].

In [5], [9], and [12]–[15], a Sobel filter is used to detect edges. Due to the color transition between the license plate and the car body, the boundary of the license plate is represented by edges in the image. The edges appear as two horizontal lines when performing horizontal edge detection, two vertical lines when performing vertical edge detection, and a complete rectangle when performing both at the same time.
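As a concrete sketch (Python with NumPy, chosen here purely for illustration), vertical edge detection with the 3×3 Sobel kernel responds at the left and right boundaries of a bright synthetic "plate" rectangle but stays silent over its uniform interior:

```python
import numpy as np

def sobel_vertical(img):
    """Vertical-edge response (horizontal gradient) via the 3x3 Sobel kernel."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros((h, w))
    p = np.pad(img.astype(float), 1, mode="edge")
    # Correlate by summing shifted copies of the padded image.
    for i in range(3):
        for j in range(3):
            out += kx[i, j] * p[i:i + h, j:j + w]
    return np.abs(out)
```

Horizontal edge detection uses the transposed kernel; applying both and combining their magnitudes outlines the full rectangle, as described above.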

In [7], the license plate rectangle is detected by using geometric attributes to locate lines that form a rectangle.

Candidate regions are generated in [5], [9], [12], and [16] by matching between vertical edges only. The magnitude of the vertical edges on the license plate is considered a robust extraction feature, while using the horizontal edges alone can result in errors due to the car bumper [10]. In [5], the vertical edges are matched to obtain some candidate rectangles. Rectangles that have the same aspect ratio as the license plate are considered as candidates. This method yielded a result of 96.2% on images under various illumination conditions. According to [9], if the vertical edges are extracted and the background edges are removed, the plate area can easily be extracted from the edge image. The detection rate on 1165 images was around 100%. The total processing time for one 384 × 288 image is 47.9 ms.

In [17], a new and fast vertical edge detection algorithm (VEDA) was proposed for license plate extraction. VEDA was shown to be about seven to nine times faster than the Sobel operator.

Block-based methods are also presented in the literature. In [18], blocks with high edge magnitudes are identified as possible license plate areas. Since block processing does not depend on the edges of the license plate boundary, it can be applied to an image with an unclear license plate boundary. The accuracy on 180 pairs of images is 92.5%.

In [19], a license plate recognition-based strategy for checking the inspection status of motorcycles was proposed. Experiments yielded recognition rates of 95.7% and 93.9% on roadside and inspection station test images, respectively. It takes 654 ms on an ultramobile personal computer and about 293 ms on a PC to recognize a license plate.


DU et al.: ALPR: STATE-OF-THE-ART REVIEW 313

Boundary-based extraction that uses the Hough transform (HT) was described in [13]. It detects straight lines in the image to locate the license plate. The Hough transform has the advantage of detecting straight lines with up to 30° inclination [20]. However, it is a time- and memory-consuming process. In [21], a boundary line-based method combining the HT and a contour algorithm is presented. It achieved extraction results of 98.8%.
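A minimal accumulator-based Hough transform for lines can be sketched as follows (illustrative Python/NumPy; real systems quantize the angle more coarsely precisely because of the memory cost noted above). Each edge pixel votes for every line (ρ, θ) passing through it, and peaks in the accumulator are the detected lines.

```python
import numpy as np

def hough_lines(edge, n_theta=180, top_k=2):
    """Minimal Hough transform: return (rho, theta_deg) of the strongest lines.

    The accumulator is sized 2*diag+1 x n_theta, which illustrates why the
    transform is memory hungry for large images and fine angle resolution.
    """
    ys, xs = np.nonzero(edge)
    diag = int(np.ceil(np.hypot(*edge.shape)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are nonnegative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    peaks = np.argsort(acc.ravel())[::-1][:top_k]
    return [(int(p // n_theta) - diag, int(p % n_theta)) for p in peaks]
```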

The generalized symmetry transform (GST) is used to extract the license plate in [22]. After obtaining edges, the image is scanned in selective directions to detect corners. The GST is then used to detect similarity between these corners and to form license plate regions.

Edge-based methods are simple and fast. However, they require the continuity of the edges [23]. When combined with morphological steps that eliminate unwanted edges, the extraction rate is relatively high. In [8], a hybrid method based on edge statistics and morphology was proposed. Its accuracy in locating 9786 vehicle license plates is 99.6%.

B. License Plate Extraction Using Global Image Information

Connected component analysis (CCA) is an important technique in binary image processing [4], [24]–[26]. It scans a binary image and labels its pixels into components based on pixel connectivity. Spatial measurements, such as area and aspect ratio, are commonly used for license plate extraction [27], [28]. Reference [28] applied CCA to low-resolution video. Using more than 4 h of video, the correct extraction rate and false alarm rate are 96.62% and 1.77%, respectively.
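A straightforward CCA pass of the kind described, 4-connected flood-fill labeling followed by bounding-box measurement, might look like the sketch below (illustrative Python; production implementations typically use an optimized two-pass labeling algorithm instead of BFS). The returned boxes can then be filtered by the plate's expected area and aspect ratio.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labeling via BFS.

    Returns the label map and a dict of label -> (x, y, width, height)
    bounding boxes, the spatial measurements used for plate filtering.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    boxes = {}
    next_label = 0
    for r in range(h):
        for c in range(w):
            if binary[r, c] and labels[r, c] == 0:
                next_label += 1
                labels[r, c] = next_label
                q = deque([(r, c)])
                rmin = rmax = r
                cmin = cmax = c
                while q:
                    y, x = q.popleft()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                boxes[next_label] = (cmin, rmin, cmax - cmin + 1, rmax - rmin + 1)
    return labels, boxes
```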

In [29], a contour detection algorithm is applied to the binary image to detect connected objects. The connected objects that have the same geometrical features as the plate are chosen as candidates. This algorithm can fail in the case of bad-quality images, which result in distorted contours.

In [30], 2-D cross correlation is used to find license plates. The 2-D cross correlation with a prestored license plate template is performed over the entire image to locate the most likely license plate area. Extracting license plates using correlation with a template is independent of the license plate position in the image. However, 2-D cross correlation is time-consuming: it is of order n^4 for an n × n image [14].
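The brute-force nature of the search is visible in a direct sketch: two nested loops over positions, each evaluating a normalized correlation over the template window, which is exactly where the O(n^4) cost comes from. This is an illustrative implementation, not the one used in [30].

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation; returns the best top-left (row, col).

    The double loop over positions times the per-window sum gives the
    O(n^4) complexity noted in the text.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r + th, c:c + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```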

C. License Plate Extraction Using Texture Features

This kind of method depends on the presence of characters in the license plate, which results in a significant change in grey-scale level between the character color and the license plate background color. It also results in a high edge density area due to the color transition. Different techniques are used in [31]–[39].

In [31] and [39], scan-line techniques are used. The change in grey-scale level produces a number of peaks along the scan line, and this number equals the number of characters.
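The peak-counting idea reduces to counting rising threshold crossings along a 1-D scan line, as in this illustrative sketch (the threshold value is an assumption for a normalized image, not a parameter from the cited work):

```python
import numpy as np

def count_peaks(scanline, threshold=0.5):
    """Count rising threshold crossings along a 1-D scan line.

    Each character stroke crossed by the line produces one rising edge,
    so the count approximates the number of characters on that line.
    """
    above = scanline > threshold
    return int(np.sum(above[1:] & ~above[:-1]))
```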

In [40], vector quantization (VQ) is used to locate text in the image. The VQ representation can give some hints about the contents of image regions, as higher contrast and more details are mapped to smaller blocks. The experimental results showed a 98% detection rate and a processing time of 200 ms using images of different quality.

In [41], the sliding concentric windows (SCW) method was proposed. In this method, license plates are viewed as irregularities in the texture of the image; therefore, abrupt changes in the local image characteristics indicate a potential license plate. In [42], a license plate detection method based on sliding concentric windows and a histogram was proposed.

Image transformations are also widely used in license plate extraction. Gabor filters are one of the major tools for texture analysis [43]. This technique has the advantage of analyzing texture in unlimited orientations and scales. The result in [44] is 98% when applied to images acquired at a fixed and specific angle. However, this method is time-consuming.

In [32], spatial frequency is identified by using the discrete Fourier transform (DFT), because the characters produce harmonics that are detected in the spectrum analysis. The DFT is used in a row-wise fashion to detect the horizontal position of the plate and in a column-wise fashion to detect the vertical position.
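The harmonic-detection step amounts to finding the dominant nonzero frequency in a row's spectrum; a row crossing evenly spaced characters shows a pronounced peak at the character-repetition frequency. A minimal sketch (illustrative, not the cited implementation):

```python
import numpy as np

def dominant_frequency(scanline):
    """Return the index of the strongest nonzero DFT harmonic of a scan line.

    The mean is removed first so the DC component does not mask the
    character-spacing harmonic.
    """
    spectrum = np.abs(np.fft.rfft(scanline - scanline.mean()))
    return int(np.argmax(spectrum[1:]) + 1)
```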

In [36], a wavelet transform (WT)-based method is used for the extraction of license plates. In the WT, there are four subbands. The HL subimage describes the vertical edge information and LH describes the horizontal one. The maximum change in horizontal edges is determined by scanning the LH image and is identified by a reference line. Vertical edges are projected horizontally below this line to determine the position based on the maximum projection. In [45], the HL subband is used to search for features of a license plate, which are then verified by checking whether a horizontal line exists around the feature in the LH subband. The execution time of license plate localization is less than 0.2 s with an accuracy of 97.33%.
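A one-level Haar DWT makes the subband roles concrete. Note this is a minimal sketch with unnormalized averaging filters, and subband naming conventions vary between libraries; the naming below follows the text, where HL (high-pass along rows, low-pass along columns) responds to vertical edges and LH to horizontal edges.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (unnormalized averaging filters).

    Returns LL, LH, HL, HH. Image dimensions must be even.
    """
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0  # row-wise low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0  # row-wise high-pass
    LL = (a[0::2] + a[1::2]) / 2.0  # low-low: coarse approximation
    LH = (a[0::2] - a[1::2]) / 2.0  # responds to horizontal edges
    HL = (d[0::2] + d[1::2]) / 2.0  # responds to vertical edges
    HH = (d[0::2] - d[1::2]) / 2.0  # diagonal detail
    return LL, LH, HL, HH
```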

In [46]–[48], adaptive boosting (AdaBoost) is combined with Haar-like features to obtain cascade classifiers for license plate extraction. Haar-like features are commonly used for object detection; using them makes the classifier invariant to the brightness, color, size, and position of license plates. In [46], the cascade classifiers use a global statistic, known as gradient density, in the first layer and then Haar-like features. The detection rate in this paper reached 93.5%. AdaBoost is also used in [49]. That method presented a detection rate of 99% using images of different formats and sizes under various lighting conditions.
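What makes Haar-like features fast is the integral image: any rectangle sum costs four lookups, so each weak feature is evaluated in constant time regardless of window size. The sketch below (illustrative only) shows a two-rectangle vertical-edge feature of the kind a cascade's weak learners threshold.

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    """Sum of img[y:y+h, x:x+w] in O(1) using four integral-image lookups."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect_vertical(ii, x, y, w, h):
    """Left-minus-right two-rectangle Haar-like feature (vertical edge response)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```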

All the methods based on texture have the advantage of detecting the license plate even if its boundary is deformed. However, these methods are computationally complex, especially when there are many edges, as in the case of a complex background or under different illumination conditions.

D. License Plate Extraction Using Color Features

Since some countries have specific colors for their license plates, some reported work involves the extraction of license plates by locating their colors in the image.

The basic idea is that the color combination of a plate and its characters is unique, and this combination occurs almost only in a plate region [50]. According to the specific formats of Chinese license plates, Shi et al. [50] proposed classifying all the pixels in the input image into 13 categories using the hue, lightness, and saturation (HLS) color model.
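A per-pixel HLS classification of this general kind can be sketched with the standard-library `colorsys` converter. The hue band below is an assumed yellow band for illustration; it is not one of the 13 categories of [50], whose thresholds are plate-format specific.

```python
import colorsys
import numpy as np

def plate_color_mask(rgb, hue_range=(0.10, 0.20), min_sat=0.4):
    """Mark pixels whose HLS hue falls in an assumed plate hue band.

    rgb: H x W x 3 array with channel values in 0..255.
    hue_range / min_sat are illustrative thresholds (colorsys hues lie in [0, 1]).
    """
    h_img, w_img, _ = rgb.shape
    mask = np.zeros((h_img, w_img), dtype=bool)
    for r in range(h_img):
        for c in range(w_img):
            h, l, s = colorsys.rgb_to_hls(*(rgb[r, c] / 255.0))
            mask[r, c] = hue_range[0] <= h <= hue_range[1] and s >= min_sat
    return mask
```

The labeled pixels would then be grouped (e.g., by connected components) and checked against the plate's aspect ratio, as in the methods below.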


In [51], a neural network is used to classify the color of each pixel after converting the RGB image into HLS. The network outputs green, red, and white, which are the license plate colors in Korea. Pixels of the same license plate color are projected vertically and horizontally to determine the region of highest color density, which is the license plate region.

In [52], since only four colors (white, black, red, and green) are utilized in the license plates, the color edge detector focuses only on three kinds of edges (i.e., black–white, red–white, and green–white edges). In the experiment, 1088 images taken from various scenes and under different conditions are employed. The license plate localization rate is 97.9%.

A genetic algorithm (GA) is used in [53] and [54] as a search method for identifying the license plate color. In [54], from training pictures with different lighting conditions, a GA is used to determine the upper and lower thresholds for the plate color. The relation between the average brightness and these thresholds is described through a special function. For any input picture, the average brightness is determined first, and then the lower and upper thresholds are obtained from this function. Any pixel with a value between these thresholds is labeled. If the connected labeled pixels form a rectangle with the same aspect ratio as the license plate, the region is considered as the plate region.

In [55], Gaussian weighted histogram intersection (HI) is used to detect the license plate by matching its color. To overcome the various illumination conditions that affect the color level, conventional HI is modified by using a Gaussian function. The weight that describes the contribution of a set of similar colors is used to match a predefined color.

The collocation of the license plate color and the character color is used in [56] to generate an edge image. The image is scanned horizontally, and if any pixel with a value within the license plate color range is found, the color values of its horizontal neighbors are checked. If two or more neighbors are within the character color range, this pixel is identified as an edge pixel in a new edge image. All edges in the new image are analyzed to find candidate license plate regions.

In [57] and [58], color images are segmented by the mean shift algorithm into candidate regions that are subsequently classified as a plate or not. A detection rate of 97.6% was obtained. In [59], a fast mean shift method was proposed.

To deal with the problem of illumination variation associated with color-based methods, [60] proposed a fuzzy-logic-based method. The hue, saturation, and value (HSV) color space is employed. The three components of HSV are first mapped to fuzzy sets according to different membership functions. The fuzzy classification function is then described by the fusion of three weighted membership degrees.
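The fusion step can be illustrated with triangular membership functions and a weighted sum. All numeric parameters below (hue band, saturation/value ramps, weights) are assumptions for illustration, not the membership functions of [60].

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def plate_score(h, s, v, weights=(0.5, 0.25, 0.25)):
    """Fuse three weighted membership degrees into one plate-likeness score.

    h, s, v in [0, 1]; the band and ramp parameters are illustrative only.
    """
    mu_h = tri_membership(h, 0.05, 0.15, 0.25)
    mu_s = tri_membership(s, 0.3, 0.7, 1.0)
    mu_v = tri_membership(v, 0.3, 0.7, 1.0)
    w_h, w_s, w_v = weights
    return w_h * mu_h + w_s * mu_s + w_v * mu_v
```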

Reference [61] proposed a new approach for vehicle license plate localization using a color barycenters hexagon model that is less sensitive to brightness.

Extracting a license plate using color information has the advantage of detecting inclined and deformed plates. However, it also has several difficulties. Defining the pixel color using the RGB value is very difficult, especially under different illumination conditions. The HLS model, which is used as an alternative, is very sensitive to noise. Methods that use color projection suffer from false detections, especially when some parts of the image, such as the car body, have the same color as the license plate.

In [62], the HSI color model is adopted to select statistical thresholds for detecting candidate regions. This method can detect candidate regions even when vehicle bodies and license plates have similar colors. The mean and standard deviation of hue are used to detect green and yellow license plate pixels. Those of saturation and intensity are used to detect green, yellow, and white license plate pixels in vehicle images.

E. License Plate Extraction Using Character Features

License plate extraction methods based on locating the plate's characters have also been proposed. These methods examine the image for the presence of characters. If characters are found, their region is extracted as the license plate region.

In [63], instead of using properties of the license plate directly, the algorithm tries to find all character-like regions in the image. This is achieved by using a region-based approach. Regions are enumerated and classified using a neural network. If a linear combination of character-like regions is found, the presence of a whole license plate is assumed.

The approach used in [64] is to horizontally scan the image, looking for repeating contrast changes on a scale of 15 pixels or more. It assumes that the contrast between the characters and the background is sufficiently good and that there are at least three to four characters whose minimum vertical size is 15 pixels. A differential gradient edge detection approach is used, and 99% accuracy was achieved in outdoor conditions.

In [65], binary objects that have the same aspect ratio as characters and more than 30 pixels are labeled. The Hough transform is applied to the upper side of these labeled objects to detect straight lines, and likewise to their lower side. If two straight lines are parallel within a certain range and the number of connected objects between them is similar to the number of characters, the area between them is considered as the license plate area.

In [66], the characters are extracted using scale-space analysis. The method extracts large-size blob-type figures that consist of smaller line-type figures as character candidates.

In [67], the character region is first recognized by identifying the character width and the difference between the background and the character region. The license plate is then extracted by finding the inter-character distance in the plate region. This method yielded an extraction rate of 99.5%.

In [68], an initial set of possible character regions is obtained by a first-stage classifier and then passed to a second-stage classifier that rejects noncharacter regions. Thirty-six AdaBoost classifiers serve as the first stage. In the second stage, a support vector machine (SVM) trained on scale-invariant feature transform (SIFT) descriptors is employed. In [69], maximally stable extremal regions are used to obtain a set of character regions. Highly unlikely regions are removed with a simple heuristic-based filter. The remaining regions with sufficient positively classified SIFT keypoints are retained as likely license plate regions.

These methods, which extract characters from the binary image to define the license plate region, are time consuming because they process all binary objects. Moreover, these methods produce errors when there is other text in the image.

TABLE I
Pros and Cons of Each Class of License Plate Extraction Methods

Methods | Rationale | Pros | Cons | References
Using boundary features | The boundary of a license plate is rectangular. | Simplest, fast, and straightforward. | Can hardly be applied to complex images, since they are too sensitive to unwanted edges. | [5], [8]–[16]
Using global image features | Find a connected object whose dimensions are like a license plate's. | Straightforward; independent of the license plate position. | May generate broken objects. | [27]–[30]
Using texture features | Frequent color transitions on the license plate. | Able to detect even if the boundary is deformed. | Computationally complex when there are many edges. | [31], [39]–[41]
Using color features | Specific color on the license plate. | Able to detect inclined and deformed license plates. | RGB is limited by illumination conditions; HLS is sensitive to noise. | [50]–[52]
Using character features | There must be characters on the license plate. | Robust to rotation. | Time consuming (processing all binary objects); produces detection errors when there is other text in the image. | [63], [64]
Using two or more features | Combining features is more effective. | More reliable. | Computationally complex. | [70]–[72], [74], [81]

F. License Plate Extraction Combining Two or More Features

In order to effectively detect the license plate, many methods search for two or more features of the license plate. The extraction methods in this case are called hybrid extraction methods [47].

Color and texture features are combined in [70]–[74]. In [70], fuzzy rules are used to extract the texture feature and yellow colors. The yellow color values, obtained from sample images, are used to train the fuzzy classifier of the color feature. The fuzzy classifier of the texture is trained based on the color change between the characters and the license plate background. For any input image, each pixel is classified as belonging to the license plate or not based on the generated fuzzy rules. In [71], two neural networks are used to detect the texture and color features: one is trained for color detection and the other for texture detection using the number of edges inside the plate area. The outputs of both neural networks are combined to find candidate regions. In [72], a single neural network scans the image using an H × W window, similar to the license plate size, and detects the color and edges inside this window to decide whether it is a candidate. In [73], a neural network scans the HLS image horizontally using a 1 × M window, where M is approximately the license plate width, and vertically using an N × 1 window, where N is the license plate height. The hue value of each pixel represents the color information and the intensity represents the texture information. The outputs of the vertical and horizontal scans are combined to find candidate regions. A time-delay neural network (TDNN) is implemented in [74] to extract plates. Two TDNNs analyze the color and texture of the license plate by examining small windows of vertical and horizontal cross sections of the image.

In [75], the edge and the color information are combined to extract the plate. High edge density areas are considered plate candidates if their pixel values match those of the license plate.

In [80], the statistical and the spatial information of the license plate is extracted using the covariance matrix. A single covariance matrix extracted from a region has enough information to match the region in different views. A neural network trained on the covariance matrices of license plate and nonlicense plate regions is used to detect the license plate.

In [81], the rectangular shape feature, the texture feature, and the color feature are combined to extract the license plate. 1176 images taken from various scenes and conditions are used. The success rate is 97.3%.

In [43], raster-scan video is used as input with low memory utilization. A Gabor filter, thresholding, and connected component labeling are used to obtain the plate region.

In [75], the wavelet transform is used to detect edges in the image. After edge detection, morphological operations are used to analyze the shape and structure of the image and to strengthen it for locating the license plate.

In [76], a method applies the HL subband feature of the 2-D DWT twice to significantly highlight the vertical edges of license plates and suppress the background noise. Then, promising license plate candidates are extracted by first-order local recursive Otsu segmentation and orthogonal projection histogram analysis. The most probable candidate is selected by edge density verification and an aspect ratio constraint.

In [77], the license plate is detected using local structure patterns computed from the modified census transform. Then, a two-part postprocessing step is used to minimize the false positive rate. One part is a position-based method that uses the positional relation between a license plate and possible false positives with similar local structure patterns, such as headlights or radiators. The other is a color-based method that uses the known color information of license plates.

Reference [78] proposed a method using wavelet analysis, improved HLS color decomposition, and Hough line detection.

G. Discussion

In this section, we described existing license plate extraction methods and classified them based on the features they used. In Table I, we summarize them and discuss the pros and cons of each class of methods.


316 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 23, NO. 2, FEBRUARY 2013

In Table IV, we highlight some typical ALPR systems presented in the literature. The techniques used in the main procedures are summarized. The performances of license plate extraction using different methods are shown.

In the literature, experimental setups are normally restricted to well-defined conditions, e.g., vehicle position and illumination. To overcome the problem of varying illumination, infrared (IR) units have been used. This method emerged from the nature of the license plate surface (retroreflective material) and has already been tested in the literature [63], [75], [82], [83]. In [75], a detection rate of 99.3% was achieved for 2483 images of Iranian vehicles captured using IR illumination units. IR cameras are also used in some commercial systems. An ALPR system [84] from Motorola and PIPS Technology acts as a silent partner in the vehicle, constantly scanning the license plates of passing vehicles. When a vehicle of interest passes, the system can alert the officer and record the time and GPS coordinates. The IBM Haifa Research Laboratory [85] developed an LPR engine for the Stockholm road-charging project. Nedap [86] claims that its automatic vehicle identification and vehicle access control applications typically achieve approximately 98% accuracy when installed properly. The Geovision [87] license plate recognition system uses advanced neural network technology to capture vehicle license plates. The system can reach up to 99% recognition success with high recognition speed (< 0.2 s). In [82], Naito et al. studied the ALPR problem from the viewpoint of the sensor system. The authors claimed that the dynamic range of a conventional CCD video camera is insufficient for ALPR purposes. Therefore, the sensor system is upgraded to double the dynamic range using two CCDs and a prism that splits an incident ray into two lights of different intensities. In testing, the input image is binarized using Otsu's method [88] and the character regions are extracted by exploiting the focal length of the sensor to estimate the character size. Recognition rates are over 99% for conventional plates and over 97% for highly inclined plates from −40° to 40°. Regarding the camera-to-car distance, as reported in [4], the license plate height should be at least 20–25 pixels to facilitate character segmentation and recognition.
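Otsu's method [88], used above for binarization, picks the grey level that maximizes the between-class variance of the resulting dark/bright split. A minimal sketch of the classical algorithm follows; it is illustrative only, not the sensor-specific implementation of [82]:

```python
import numpy as np

def otsu_threshold(img):
    """Return the grey level t that maximizes between-class variance;
    pixels <= t form the dark class, pixels > t the bright class."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float((np.arange(256) * hist).sum())
    best_t, best_var = 0, -1.0
    w0 = 0.0    # cumulative weight of the dark class
    sum0 = 0.0  # cumulative grey-level mass of the dark class
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a strongly bimodal plate image the returned threshold falls between the two grey-level modes, separating characters from background.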

III. License Plate Segmentation

The isolated license plate is then segmented to extract the characters for recognition. The license plate extracted in the previous stage may have some problems, such as tilt and nonuniform brightness. The segmentation algorithms should overcome these problems in a preprocessing step.

In [51] and [89], a bilinear transformation is used to map the tilted extracted license plate to a straight rectangle.

In [90], a least-squares method is used to treat horizontal tilt and vertical tilt in license plate images.

In [91], according to the Karhunen–Loeve transform, the coordinates of the characters are arranged into a 2-D covariance matrix. The eigenvector and the rotation angle α are computed in turn, and then horizontal tilt correction of the image is performed. For vertical tilt correction, three methods, the K-L transform, line fitting based on K-means clustering, and line fitting based on least squares, are put forward to compute the vertical tilt angle θ.

In [92], a line fitting method based on least-squares fitting with perpendicular offsets was introduced for correcting license plate tilt in the horizontal direction. Tilt correction in the vertical direction by minimizing the variance of the coordinates of the projection points was also proposed. Character segmentation is performed after horizontal correction, and character points are projected along the vertical direction after a shear transform.
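The line-fitting idea behind these tilt-correction methods can be sketched as follows. Note that this minimal version uses ordinary least squares with vertical offsets, not the perpendicular offsets of [92], and the character centroids are assumed to be given:

```python
import numpy as np

def horizontal_tilt_angle(centroids):
    """Fit y = a*x + b through the character centroids by least squares;
    the slope of this baseline gives the horizontal tilt angle in degrees."""
    xs = np.array([c[0] for c in centroids], dtype=float)
    ys = np.array([c[1] for c in centroids], dtype=float)
    slope, _ = np.polyfit(xs, ys, 1)  # degree-1 least-squares fit
    return float(np.degrees(np.arctan(slope)))
```

The returned angle can then be used to rotate (or shear) the plate image so that the character baseline becomes horizontal.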

Choosing an inappropriate threshold for the binarization of the extracted license plate results in joined characters, which make the segmentation very difficult [90]. License plates with a surrounding frame are also difficult to segment since, after binarization, some characters may be joined with the frame [93]. Enhancing the image quality before binarization helps in choosing an appropriate threshold [93]. Techniques commonly used to enhance the license plate image are noise removal, histogram equalization, and contrast enhancement. In [93], a system was proposed to conduct gradient analysis on the whole image to detect the license plate; the detected license plate is then enhanced by grey-level transformation. A method to enhance only the characters and to reduce the noise was proposed in [94]. The size of the characters is considered to be approximately 20% of the license plate size. First, the grey-scale level is scaled to 0–100, then the largest 20% of pixels are multiplied by 2.55. Only the characters are enhanced, while noise pixels are reduced. Since binarization with one global threshold cannot always produce acceptable results, adaptive local binarization methods are normally used. In [95], local thresholding is used for each pixel. The threshold is computed by subtracting a constant c from the mean grey level in an m × n window centered at the pixel. In [96], the threshold is given by the Niblack binarization formula, which varies the threshold over the image based on the local mean and the standard deviation.
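Both local rules above fit one pattern: a per-pixel threshold derived from neighbourhood statistics. A minimal sketch of the Niblack rule follows; the window size and weight k are illustrative defaults, not values from [96], and the mean-minus-constant rule of [95] is the special case that drops the standard-deviation term:

```python
import numpy as np

def niblack_binarize(img, win=5, k=-0.2):
    """Niblack local binarization: T(x, y) = mean + k * std over a
    win x win neighbourhood centred on the pixel. O(h*w*win^2) loop,
    kept simple for clarity rather than speed."""
    r = win // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + win, x:x + win]  # window around (y, x)
            t = block.mean() + k * block.std()
            out[y, x] = 255 if img[y, x] > t else 0
    return out
```

Because the threshold adapts to each neighbourhood, characters survive binarization even when illumination varies across the plate, which a single global threshold cannot guarantee.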

In the following, we categorize the existing license plate segmentation methods based on the features they used.

A. License Plate Segmentation Using Pixel Connectivity

Segmentation is performed in [12], [30], [52], and [97]–[99] by labeling the connected pixels in the binary license plate image. The labeled pixels are analyzed, and those that have the same size and aspect ratio as the characters are considered license plate characters. This method fails to extract all the characters when there are joined or broken characters.
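A minimal sketch of this connectivity-based segmentation is given below; the size and aspect ratio limits are illustrative assumptions, not values from the cited papers:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling via BFS flood fill.
    Returns {label: (min_y, min_x, max_y, max_x)} bounding boxes."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    boxes, cur = {}, 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                cur += 1
                labels[sy, sx] = cur
                q = deque([(sy, sx)])
                y0 = y1 = sy
                x0 = x1 = sx
                while q:
                    y, x = q.popleft()
                    y0, y1 = min(y0, y), max(y1, y)
                    x0, x1 = min(x0, x), max(x1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                boxes[cur] = (y0, x0, y1, x1)
    return boxes

def character_boxes(binary, min_h=3, aspect=(0.2, 1.0)):
    """Keep blobs whose height and width/height ratio look character-like,
    returned in left-to-right order."""
    out = []
    for y0, x0, y1, x1 in label_components(binary).values():
        bh, bw = y1 - y0 + 1, x1 - x0 + 1
        if bh >= min_h and aspect[0] <= bw / bh <= aspect[1]:
            out.append((y0, x0, y1, x1))
    return sorted(out, key=lambda b: b[1])
```

The aspect ratio filter is what discards frame fragments and dirt blobs; it is also exactly why joined characters (one wide blob) and broken characters (several thin blobs) defeat this class of methods.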

B. License Plate Segmentation Using Projection Profiles

Since characters and license plate backgrounds have different colors, they have opposite binary values in the binary image. Therefore, some proposed methods, as in [15], [21], [24], [32], [50], [51], [74], and [100]–[104], project the binary extracted license plate vertically to determine the starting and ending positions of the characters, and then project the extracted characters horizontally to extract each character alone. In [15], along with noise removal and character sequence analysis, vertical projection is used to extract the characters.


DU et al.: ALPR: STATE-OF-THE-ART REVIEW 317

TABLE II

Pros and Cons of Each Class of License Plate Segmentation Methods

Using pixel connectivity [12], [30]. Pros: simple and straightforward; robust to license plate rotation. Cons: fails to extract all the characters when there are joined or broken characters.

Using projection profiles [21], [24], [51], [101]. Pros: independent of character positions; able to deal with some rotation. Cons: noise affects the projection value; requires prior knowledge of the number of license plate characters.

Using prior knowledge of characters [6], [14], [105], [106]. Pros: simple. Cons: limited by the prior knowledge; any change may result in errors.

Using character contours [107], [108]. Pros: can get exact character boundaries. Cons: slow and may generate incomplete or distorted contours.

Using combined features [111], [112]. Pros: more reliable. Cons: computationally complex.

By examining more than 30 000 images, this method reached an accuracy rate of 99.2% with a 10–20 ms processing time. In [51] and [101], character color information is used in the projection instead of the binary license plate. From the literature, it is evident that the method that exploits vertical and horizontal projections of the pixels is the most common and simplest one.

The advantage of the projection method is that the extraction of characters is independent of their positions, so the license plate can be slightly rotated. However, it depends on the image quality: any noise affects the projection value. Moreover, it requires prior knowledge of the number of plate characters.
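A minimal sketch of vertical-projection segmentation is given below; the minimum run width used to reject noise columns is an illustrative assumption:

```python
import numpy as np

def segment_by_projection(binary, min_width=2):
    """Vertical projection profile: per-column ink counts locate the
    gaps between characters; each run of nonzero columns wider than
    min_width becomes one character span (start, end)."""
    profile = binary.sum(axis=0)  # ink pixels per column
    spans, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                      # run begins
        elif v == 0 and start is not None:
            if x - start >= min_width:
                spans.append((start, x))   # run ends
            start = None
    if start is not None and binary.shape[1] - start >= min_width:
        spans.append((start, binary.shape[1]))
    return spans
```

Each returned span is then cropped and projected horizontally to trim the character's top and bottom, mirroring the two-pass procedure described above. The sketch also makes the weaknesses concrete: a noisy column merges two spans, and the span count must match the expected number of plate characters.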

C. License Plate Segmentation Using Prior Knowledge of Characters

Prior knowledge of characters can help the segmentation of the license plate. In [14], the binary image is scanned by a horizontal line to find the starting and ending positions of the characters. When the ratio of character pixels to background pixels in this line exceeds a certain threshold after being lower than it, this is considered the starting position of the characters. The opposite is done to find the ending position of the characters.

In [6], the extracted license plate is resized to a known template size in which all character positions are known. After resizing, the same positions are extracted as the characters. This method has the advantage of simplicity. However, if there is any shift in the extracted license plate, the extraction yields background instead of characters.

In [105], the proposed approach provides a solution for vehicle license plates that are severely degraded. Color collocation is used to locate the license plate in the image. The dimensions of each character are used to segment the characters. The layout of the Chinese license plate is used to construct a classifier for recognition.

The license plates in Taiwan all have the same color distribution [106], i.e., black characters on a white background. If the license plate is scanned with a horizontal line, the number of black-to-white (or white-to-black) transitions is at least 6 and at most 14. The Hough transform is used to correct the rotation problem, a hybrid binarization technique is used to segment the characters in dirty license plates, and a feedback self-learning procedure is employed to adjust the parameters. In the experiment, 332 different images captured under various illuminations and at different distances are used. The overall location and segmentation rates are 97.1% and 96.4%, respectively.

D. License Plate Segmentation Using Character Contours

Contour modeling is also employed for character segmentation. In [108], a shape-driven active contour model is established, which utilizes a variational fast marching algorithm. The system works in two steps. First, the rough location of each character is found by an ordinary fast marching technique [109] combined with a gradient-dependent and curvature-dependent speed function [110]. Then, the exact boundaries are obtained by a special fast marching method.

E. License Plate Segmentation Using Combined Features

In order to segment the license plate efficiently, two or more features of the characters can be used. In [111], an adaptive morphology-based segmentation approach for seriously degraded plate images was proposed. A histogram-based algorithm detects fragments and merges them. A morphological thickening algorithm [113] locates reference lines for separating the overlapped characters. A morphological thinning algorithm [114] and a segmentation cost calculation determine the baseline for segmenting the connected characters. For 1189 degraded images, the entire character content is correctly segmented in 1005 of them.

In [115], a method was described for segmenting the main numeric characters on a license plate by introducing dynamic programming (DP). The proposed method runs very rapidly by applying the bottom-up approach of the DP algorithm, and robustly by minimizing the use of environment-dependent features such as color and edges. The success rate for detection of the four main numbers is 97.14%.

F. Discussion

In this section, we described existing license plate segmentation methods and classified them based on the features they used. In Table II, we summarize them and discuss the pros and cons of each class of methods.

In Table IV, we highlight some typical ALPR systems presented in the literature. The techniques used in the main procedures are summarized. The performances of license plate segmentation using different methods are shown.



IV. Character Recognition

The extracted characters are then recognized, and the output is the license plate number. Character recognition in ALPR systems faces some difficulties. Due to the camera zoom factor, the extracted characters do not have the same size and thickness [30], [93]. Resizing the characters to one size before recognition helps overcome this problem. The character font is not always the same, since different countries' license plates use different fonts. Extracted characters may contain noise or be broken [30]. The extracted characters may also be tilted [30].

In the following, we categorize the existing character recognition methods based on the features they used.

A. Character Recognition Using Raw Data

Template matching is a simple and straightforward recognition method [5], [101]. The similarity between a character and the templates is measured, and the template most similar to the character is taken as the result. Most template matching methods use binary images, because grey-scale values change with any change in the lighting [90].

Template matching is performed in [5], [12], [30], [51], [93], and [116] after resizing the extracted character to the template size. Several similarity measures are defined in the literature, including the Mahalanobis distance and the Bayes decision technique [30], the Jaccard value [51], the Hausdorff distance [116], and the Hamming distance [5].

Character recognition in [93] and [117] uses normalized cross correlation to match the extracted characters with the templates. Each template scans the character column by column to calculate the normalized cross correlation, and the template with the maximum value is the most similar one.

Template matching is useful for recognizing single-font, nonrotated, nonbroken, fixed-size characters. If a character differs from the template due to a font change, rotation, or noise, template matching produces incorrect recognition [90]. In [82], the problem of recognizing tilted characters is solved by storing several templates of the same character at different inclination angles.
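A minimal sketch of normalized-cross-correlation template matching over same-size character and template patches is shown below; it simplifies the column-by-column scan described above to a single whole-patch comparison, and the labels and template shapes are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross correlation between two same-size
    grey-level patches; 1.0 means a perfect (up to gain/offset) match."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def recognize(char_img, templates):
    """Return the label of the template with the highest NCC score.
    `templates` maps label -> patch of the same shape as char_img."""
    return max(templates, key=lambda lbl: ncc(char_img, templates[lbl]))
```

Because NCC subtracts the mean and normalizes by the energy, it tolerates uniform brightness and contrast changes, which is exactly why it is preferred over raw pixel differences for grey-level matching.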

B. Character Recognition Using Extracted Features

Since not all character pixels have the same importance in distinguishing the character, a feature extraction technique that extracts salient features from the character is a good alternative to grey-level template matching [101]. It reduces the processing time of matching, because not all pixels are involved. It also overcomes the problems of template matching if the features are strong enough to distinguish characters under any distortion [90]. The extracted features form a feature vector, which is compared with prestored feature vectors to measure similarity.

In [101] and [119], the feature vector is generated by projecting the binary character horizontally and vertically. In [119], each projection is quantized into four levels. In [102], the feature vector is generated from the Hotelling transform of each character; the Hotelling transform is very sensitive to the segmentation result. In [120], the feature vector is generated by dividing the binary character into blocks of 3 × 3 pixels and counting the number of black pixels in each block. In [97], the feature vector is generated by dividing the binary character, after a thinning operation, into 3 × 3 blocks and counting the number of elements that have 0°, 45°, 90°, and 135° inclination. In [121], the character is scanned along a central axis, defined as the connection between the upper-bound and lower-bound horizontal central moments. The number of transitions from character to background and the spacing between them then form a feature vector for each character. This method is invariant to rotation of the character, because the same feature vector is generated. In [122], the feature vector is generated by sampling the character contour all around; the resulting waveform is quantized into the feature vector. This method recognizes multifont and multisize characters, since the contour of the character is not affected by any font or size change. In [123], the Gabor filter is used for feature extraction. The character edges whose orientation matches the angle of the filter have the maximum response to the filter, which can be used to form a feature vector for each character. In [124], Kirsch edge detection is applied to the character image in different directions to extract features. Using Kirsch edge detection for feature extraction and recognition achieved better results than other edge detection methods, such as Prewitt, Frei–Chen, and Wallis [125]. In [126], the feature vector is extracted from the binary character image by performing a thinning operation and then converting the direction of the character strokes into one code. In [127], the grey-scale values of pixels in 11 subblocks are fed as features into a neural network classifier. In [128], a scene is processed by visiting nonoverlapping 5 × 5 blocks, processing the surrounding image data to extract "spread" edge features based on the research conducted in [129], and classifying this subimage according to the coarse-to-fine search strategy described in [130]. In [49], three character features, contour-crossing counts, directional counts, and peripheral background area, are used; the classification is realized by a support vector machine. In [52], the topological features of characters, including the number of holes, endpoints, three-way nodes, and four-way nodes, are used. These features are invariant to spatial transformations.
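Among the features above, the block-counting feature of [120] is the easiest to sketch. The 3 × 3 grid matches the paper's description; everything else here (input format, integer output) is an illustrative assumption:

```python
import numpy as np

def zoning_features(binary, grid=(3, 3)):
    """Split the binary character image into grid blocks and count the
    ink pixels in each block, returning a flat feature vector (the
    block-counting feature of [120] when grid is 3 x 3)."""
    h, w = binary.shape
    gy, gx = grid
    ys = np.linspace(0, h, gy + 1, dtype=int)  # row block boundaries
    xs = np.linspace(0, w, gx + 1, dtype=int)  # column block boundaries
    return [int(binary[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].sum())
            for i in range(gy) for j in range(gx)]
```

The resulting 9-element vector can then be matched against prestored vectors with, e.g., a nearest-neighbour rule, which is far cheaper than comparing every pixel.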

After feature extraction, many classifiers can be used to recognize characters, such as ANNs [127], SVMs [74], and HMMs [95]. Some researchers integrate two kinds of classification schemes [131], [132], multistage classification schemes [133], or a "parallel" combination of multiple classifiers [134], [135].

C. Discussion

In this section, we described existing character recognition methods and classified them based on the features they used. In Table III, we summarize them and discuss the pros and cons of each class of methods.

In Table IV, we highlight some typical ALPR systems presented in the literature. The techniques used in the main procedures are summarized. The performances of character



TABLE III

Pros and Cons of Each Class of Character Recognition Methods

Using pixel values:

Template matching [5], [93], [117]. Pros: simple and straightforward. Cons: processes nonimportant pixels and is slow; vulnerable to any font change, rotation, noise, or thickness change.

Several templates for each character [82]. Pros: able to recognize tilted characters. Cons: more processing time.

Using extracted features [49], [52], [97], [101], [102], [119]–[124], [126]–[128]. Pros: able to extract salient features; robust to distortion; fast recognition, since the number of features is smaller than the number of pixels. Cons: feature extraction takes time; nonrobust features degrade the recognition. Reported features include: horizontal and vertical projections [101], [119]; the Hotelling transform [102]; the number of black pixels in each 3 × 3 pixel block [120]; counts of elements with certain inclinations [97]; the number of transitions from character to background and the spacing between them [121]; samples of the character contour all around [122]; Gabor filter responses [123]; Kirsch edge detection [124]; the direction of the character strokes converted into one code [126]; pixel values of 11 subblocks [127]; nonoverlapping 5 × 5 blocks [128]; contour-crossing counts (CCs), directional counts (DCs), and peripheral background area (PBA) [49]; and topological features of characters, including the number of holes, endpoints, three-way nodes, and four-way nodes [52].

recognition using different methods are shown, when available, together with the processing speed.

Some characters are similar in shape, such as (B-8), (O-0), (I-1), (A-4), (C-G), (D-O), and (K-X). These characters confuse the character recognizer, especially when they are distorted. Dealing with this ambiguity problem deserves more attention than regular OCR in future research.

V. Summary, Future Directions, and Conclusion

A. Summary

In general, an ALPR system consists of four processing stages. In the image acquisition stage, some points have to be considered when choosing the ALPR system camera, such as the camera resolution and the shutter speed. In the license plate extraction stage, the license plate is extracted based on features such as the color, the boundary, or the existence of the characters. In the license plate segmentation stage, the characters are extracted by projecting their color information, by labeling them, or by matching their positions with a template. Finally, the characters are recognized in the character recognition stage by template matching or by classifiers such as neural networks and fuzzy classifiers. Automatic license plate recognition is quite challenging due to the different license plate formats and the varying environmental conditions. Numerous ALPR techniques have been proposed in recent years. Table IV highlights the performance of some typical ALPR systems presented in the literature. Issues such as the main processing procedure, the experimental database, the processing time, and the recognition rate are provided. However, the authors of [4] pointed out that it is inappropriate to declare explicitly which methods demonstrate the highest performance, since there is no uniform way to evaluate them. Therefore, in [4], Anagnostopoulos et al. provided researchers with a common test set to facilitate systematic performance assessment.

B. Current Trends and Future Directions

Although significant progress in ALPR techniques has been made in the last few decades, there is still much work to be done, since a robust system should work effectively under a variety of environmental and plate conditions.

An effective ALPR system should be able to deal with multistyle plates, e.g., different national plates with different fonts and syntax. Little existing research has addressed this issue, and what exists still has some constraints. In [127], four critical factors were proposed to deal with the multistyle plate problem: the plate rotation angle, the number of character lines, the alphanumeric types used, and the character formats. Experimental results showed 90% overall success on a data set of 16 800 images. The processing speed using lower resolution images is about 8 f/s. Reference [136] also proposed an approach that can deal with various national plates. The optical character recognition is managed by a hybrid strategy. An efficient probabilistic edit distance is used to provide explicit video-based ALPR, and cognitive loops are introduced at critical stages of the algorithm.

In most ALPR systems, either the acquisition devices provide still images only, or only some frames of the image sequence are captured and analyzed independently. However, taking advantage of the temporal information of a video can greatly improve system performance. Basically, using the temporal information consists of tracking vehicles over time to estimate the license plate motions and thus make the recognition step more efficient. There are two kinds of strategies to achieve that goal. One strategy is using the tracking output to form a high resolution image by combining multiple,



TABLE IV

Performance Comparison of Some Typical ALPR Systems [License Plate Extraction (LPE), License Plate Segmentation (LPS), Optical Character Recognition (OCR)]

[2] LPE: matching of vertical edges; LPS: vertical projection; OCR: template matching on Hamming distance. Database: 610 images, 640 × 480 pixels, various illumination conditions, some angle of view, and dirty plates. LPE rate: 96.2%; total rate: 95%. Saudi Arabian plates.

[8] LPE: vertical edges. Database: 1165 images, 384 × 288 pixels. LPE rate: ~100%; processing time: 47.9 ms (real time). Chinese plates.

[18] LPE: block-based processing; OCR: template matching. Database: 180 pairs of images, multiple plates with occlusion and different sizes. LPE rate: 94.4%; OCR rate: 95.7%; processing time: 75 ms for LPE (real time). Taiwanese plates.

[20] LPE: Hough transform and contour algorithm; LPS: vertical and horizontal projections; OCR: hidden Markov model (HMM). Database: 805 images, 800 × 600 pixels, different rotation and lighting conditions. LPE rate: 98.8%; LPS rate: 97.6%; OCR rate: 97.5%; total rate: 92.9%; processing time: 0.65 s for LPE and 0.1 s for OCR (not real time). Vietnamese plates.

[21] LPE: GST. Database: 330 images, various viewing directions. LPE rate: 93.6%; processing time: 1.3 s (not real time). Korean plates.

[22] LPE: edge detection and vertical and horizontal projections; LPS: vertical and horizontal projections; OCR: back-propagation neural network. Database: 12 s video, 320 × 240 pixels. Total rate: 85.5% (~100% after retraining); processing time: 100 ms (real time). Taiwanese plates.

[5] LPE: edge statistics and morphology. Database: 9825 images, 768 × 534 pixels, different lighting conditions. LPE rate: 99.6%; processing time: 100 ms (real time). Chinese plates.

[26] LPE: CCA. Database: 4+ hrs of video, 320 × 240 pixels, degraded low-resolution video. LPE rate: 96.6%; processing time: 30 ms (real time). Taiwanese plates.

[49] LPE: VQ. Database: 300+ images, 768 × 256 pixels, different brightness and sensor positions. LPE rate: 98%; processing time: 200 ms (not real time). Italian plates.

[50] LPE: SCW; LPS: SCW; OCR: two-layer probabilistic neural network. Database: 1334 images, different background and illumination. LPE rate: 96.5%; OCR rate: 89.1%; total rate: 86%; processing time: 276 ms (111 ms for LPE, 37 ms for LPS, and 128 ms for OCR) (real time). Greek plates.

[52] LPE: Gabor filter; LPS: local vector quantization. Database: 300 images, fixed angle and different illumination. LPE rate: 98%; OCR rate: 94.2%; processing time: 3.12 s (not real time). Multinational plates.

[42] LPE: WT. Database: 315 images, 600 × 450 pixels, different illumination and orientation. LPE rate: 92.4%. Taiwanese plates.

[57] LPE: Haar-like features and AdaBoost. Database: 640 × 480 pixels, live video. LPE rate: 95.6%. American plates.

[53] LPE: local Haar-like features and AdaBoost. Database: 160 images, 648 × 486 pixels, various conditions and view angles. LPE rate: 93.5%; processing time: 80 ms (real time). Australian plates.

[58] LPE: Haar-like features and cascade AdaBoost; LPS: peak-valley analysis; OCR: SVM based on CCs, DCs, and PBA. Database: 11 896 images, 640 × 480 pixels, different format, size, and lighting conditions. LPE rate: 99.6%; total rate: 98.3%; processing time: 30 ms (real time). Taiwanese plates.

[62] LPE: color and fuzzy aggregation; LPS: connected component and blob coloring; OCR: self-organizing character recognition. Database: 1088 images, various scenes and conditions. LPE rate: 97.9%; OCR rate: 95.6%; total rate: 93.7%; processing time: 0.4 s for LPE and 2 s for OCR (not real time). Taiwanese plates.

[71] LPE: color, mean shift. Database: 57 images, 324 × 243 pixels. LPE rate: 97.6%; processing time: 6 s (not real time). Australian plates.

[75] LPE: horizontal scan of repeating contrast changes; LPS: lateral histogram analysis; OCR: fully connected feedforward artificial neural network with sigmoidal activation functions. LPE rate: 99%; OCR rate: 98%; total rate: 80%; processing time: 15 s (not real time). Italian plates.

[82] LPE: color, texture, and TDNN; LPS: TDNN; OCR: SVM. Database: 400 video clips, 640 × 480 pixels. LPE rate: 97.5%; OCR rate: 97.2%; processing time: 1 s (not real time). Korean plates.

[85] LPE: rectangular shape, texture, and color features; LPS: feature projection; OCR: template matching. Database: 1176 images, 640 × 480 pixels, various scenes and conditions. LPE rate: 97.3%; LPS rate: 95.7%; total rate: 93.1%; processing time: 220 ms for LPE and 0.9 s for OCR (not real time). Chinese plates.

[83] LPE: IR. Database: 2483 images, various illumination, shadow, scale, rotation, and weather conditions. LPE rate: 99.3%; processing time: 300 ms (not real time). Iranian plates.

[86] LPE: 2 CCDs and a prism; OCR: template matching and normalized cross correlation. Database: 1000 images, various illumination conditions. Total rate: 99% (conventional), 97% (highly inclined). Japanese plates.

[93] LPE: gradient analysis; OCR: normalized cross correlation and grey-level template matching. Database: 2340 images, different weather and illumination conditions. OCR rate: 98.6%; total rate: 91.1%; processing time: 1.1 s (not real time). Italian plates.

[109] LPE: scan line, texture properties, color, and Hough transform; LPS: hybrid binarization and feedback self-learning. Database: 332 images, 867 × 623 pixels, various illumination and different distances. LPE rate: 97.1%; LPS rate: 96.4%; processing time: 0.53 s for LPE and 0.06 s for LPS (not real time). Taiwanese plates.

[14] LPS: scan line and vertical projection. Database: 30 000+ images, tilted. LPS rate: 99.2%; processing time: 10–20 ms for LPS (real time). Chinese plates.

[115] LPS: mathematical morphology with adaptive segmentation and fragment merging. Database: 1189 images, degraded with fragmented and connected characters. LPS rate: 84.5%. Japanese plates.

[106] LPE: corner detection and template matching; LPS: vertical and horizontal projections; OCR: Hotelling transform and Euclidean distance. Database: 1000+ images, 439 × 510 pixels. Total rate: 99.6%. Dutch plates.

[108] LPE: prior knowledge of color collocation; LPS: prior knowledge of character dimensions; OCR: improved back-propagation neural network and prior knowledge of the plate layout. Total rate: 97.7%. Chinese plates.

subpixel shifted, low-resolution images. This technique is known as super-resolution reconstruction [137]. Reference [138] proposed to detect the license plate using an AdaBoost classifier and to track it using a data-association approach. Reference [143] proposed a new reduced cost function to produce images of higher resolution from low-resolution frame sequences; it can be employed for real-time processing.
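To illustrate the idea only (not the specific algorithms of [137] or [143]), multi-frame super-resolution can be sketched as placing each low-resolution frame onto a finer grid at its known subpixel shift and averaging. The function below is a minimal shift-and-add sketch; the inputs and upsampling factor are illustrative assumptions:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Naive multi-frame super-resolution: place each low-resolution
    frame onto a finer grid at its (known) subpixel shift and average.

    frames: list of 2-D arrays (grayscale plate crops)
    shifts: list of (dy, dx) subpixel shifts, in low-res pixels
    scale:  upsampling factor
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        # integer position of this frame's samples on the high-res grid
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        for y in range(h):
            for x in range(w):
                yy, xx = y * scale + oy, x * scale + ox
                if 0 <= yy < h * scale and 0 <= xx < w * scale:
                    acc[yy, xx] += f[y, x]
                    cnt[yy, xx] += 1
    cnt[cnt == 0] = 1  # leave unobserved grid cells at zero
    return acc / cnt
```

In practice the unobserved grid cells would be filled by interpolation or by a regularized cost function rather than left at zero.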

As an alternative to super-resolution techniques, the high-level outputs of the recognition stage can be merged to make a final decision. For example, in [139], the authors presented a real-time video-based method that post-processes the output of a Kalman tracker. In that work, the Viola–Jones object detector is used to detect the plate position, and a support vector machine is used to recognize characters. To make full use of the video information, the Kalman tracker predicts the plate positions in subsequent frames to reduce the detector's search area. The final character recognition also uses interframe information to enhance recognition performance.
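The prediction step can be sketched with a constant-velocity Kalman filter over the plate centre. This is an illustrative sketch, not the exact tracker of [139]; the noise matrices Q and R and the search-window size are assumed values:

```python
import numpy as np

# Constant-velocity Kalman filter over the plate centre (x, y, vx, vy).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # only the centre is measured
Q = np.eye(4) * 0.1                   # process noise (assumed)
R = np.eye(2) * 2.0                   # detector noise (assumed)

def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fuse a new plate detection z = (cx, cy) into the estimate."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def search_window(x, half=40):
    """Restrict the plate detector to a box around the predicted centre."""
    cx, cy = x[0], x[1]
    return (cx - half, cy - half, cx + half, cy + half)
```

Running the detector only inside `search_window` of the predicted state is what reduces the per-frame search area.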

The resolution of current ALPR video cameras is low. Recently, high-definition cameras have been adopted in license plate recognition systems, since these cameras preserve object details at a longer distance from the camera. However, due to the large amount of information to be processed, the computational costs are high. To address this issue, Giannoukos et al. [140] introduced a scanning method, operator context scanning (OCS), which uses pixel operators in the form of a sliding window, associating a pixel and its neighborhood with the likelihood of belonging to the object being searched for. The OCS method increases the processing speed of the original sliding-concentric-windows (SCW) method by 250%.
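A generic window-based scan of this kind can be sketched as follows. The pixel operators of OCS [140] are not reproduced here; the local-standard-deviation texture score and the window size, step, and threshold are all illustrative assumptions standing in for a plate-likeness measure:

```python
import numpy as np

def sliding_window_scan(img, win=8, step=4, score_fn=None):
    """Scan an image with a sliding window, scoring each neighbourhood
    for 'plate-likeness' and returning candidate window origins.
    The default score, local standard deviation, exploits the fact
    that plates are high-contrast, textured regions."""
    if score_fn is None:
        score_fn = np.std
    h, w = img.shape
    hits = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            if score_fn(img[y:y + win, x:x + win]) > 20.0:  # illustrative threshold
                hits.append((y, x))
    return hits
```

The point of operator-context methods is to make this scan cheap enough to run on high-definition frames in real time.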

In [141], on the basis of the existing local binary pattern operator, the authors proposed a low-computational advanced


local binary pattern (ALBP) operator as a feature extractor for low-resolution Chinese character recognition on vehicle license plates. Reference [142] also proposed a method for recognizing blurred vehicle license plates based on natural image matting.
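The baseline 8-neighbour local binary pattern on which such descriptors build can be sketched as below. This is the standard LBP histogram; the advanced variant of [141] itself is not reproduced:

```python
import numpy as np

def lbp_histogram(img):
    """Standard 8-neighbour local binary pattern histogram.
    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre value, giving an 8-bit code; the normalized
    histogram of codes is the texture feature vector."""
    h, w = img.shape
    # offsets of the 8 neighbours, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256, int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist / max(1, hist.sum())  # normalized feature vector
```

Such histograms, computed per character cell, can then feed any classifier used in the recognition stage.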

Segmentation and recognition are two important tasks in ALPR. Traditionally, these two tasks were implemented in a cascade, independently and sequentially [145]. Recently, there has been increasing interest in exploring the interaction between the two tasks. For example, prior knowledge of the characters to be recognized has been employed for segmentation [144], and recognition outputs have been fed back to the segmentation process [52]. Reference [145] proposed a two-layer Markov network to formulate the joint segmentation and recognition problem in a 1-D case. Both low-level features and high-level knowledge are integrated into the two-layer Markov network, where the two tasks are achieved simultaneously as the result of belief-propagation inference.
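The flavour of joint inference on a 1-D chain can be illustrated with max-product (Viterbi) message passing, where the unary terms play the role of per-position recognition scores and a pairwise term encodes a layout prior. This is a schematic of chain inference under assumed log-domain scores, not the two-layer Markov network of [145]:

```python
import numpy as np

def map_chain(unary, pairwise):
    """Max-product (Viterbi) inference on a chain MRF.
    unary:    (n, k) log-domain recognition scores per position/label
    pairwise: (k, k) layout-prior compatibility between adjacent labels
    Returns the jointly most likely label sequence, showing how
    recognition evidence and a segmentation/layout prior interact."""
    n, k = unary.shape
    msg = unary[0].copy()
    back = np.zeros((n, k), int)
    for i in range(1, n):
        scores = msg[:, None] + pairwise        # (prev label, cur label)
        back[i] = scores.argmax(axis=0)         # best predecessor per label
        msg = scores.max(axis=0) + unary[i]
    labels = [int(msg.argmax())]
    for i in range(n - 1, 0, -1):               # backtrack
        labels.append(int(back[i][labels[-1]]))
    return labels[::-1]
```

With a nontrivial pairwise term, a weak character hypothesis can be overridden by its neighbours, which is precisely the benefit of coupling the two tasks.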

Recently, license plate recognition has also been used for vehicle manufacturer and model recognition [146], [147].

There are many other open issues for future research.

1) The technical specifications of video surveillance equipment vary: older systems may be equipped with low-resolution black-and-white cameras, while newer systems are likely to be equipped with high-resolution color cameras. An effective ALPR system should be able to integrate with the varying existing surveillance equipment.

2) For video-based ALPR, the frames that contain passing cars must first be extracted, which requires either frame differencing or motion detection. Extracting the correct frame with a clear plate image is another challenge, especially when the car is moving very fast, e.g., violating the speed limit.

3) To deal with the illumination problem, good preprocessing methods (image enhancement) should be used to remove the influence of lighting and to make the license plate salient.

4) New sensing systems that are robust to changes in illumination conditions should also be used to elevate ALPR performance.

5) For optical character recognition, future research should concentrate on improving the recognition rate on ambiguous characters, such as (B-8), (O-0), (I-1), (A-4), (C-G), (D-O), and (K-X), and on broken characters.

6) To evaluate the performance of different ALPR systems, a uniform evaluation protocol is needed. Besides a common test set, regulations for performance comparison must also be set, such as how to define the correct extraction of a license plate, what constitutes successful segmentation, and how to calculate the character recognition rate. Here, we suggest that plate extraction is successful when all characters on the plate are shown; character segmentation is successful when the character image encloses the whole character; and character recognition is successful when all the characters on a plate are correctly recognized.
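For issue 2), the frame-differencing option can be sketched as below; the thresholds are illustrative assumptions, and a deployed system would use a more robust background model:

```python
import numpy as np

def moving_frames(frames, thresh=15.0, min_fraction=0.01):
    """Select frames that likely contain a passing vehicle by simple
    frame differencing (one of the two options mentioned above).

    frames:       list of grayscale frames as 2-D float arrays
    thresh:       per-pixel intensity change counted as motion (assumed)
    min_fraction: fraction of changed pixels needed to keep a frame (assumed)
    """
    keep = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i] - frames[i - 1])
        moving = (diff > thresh).mean()  # fraction of changed pixels
        if moving >= min_fraction:
            keep.append(i)
    return keep
```

Only the kept frames would then be passed to the license plate extraction stage.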

C. Conclusion

This paper presented a comprehensive survey of existing ALPR techniques, categorizing them according to the features used in each stage. Comparisons in terms of pros, cons, recognition results, and processing speed were given, and a future forecast for ALPR was provided. Future research on ALPR should concentrate on multistyle plate recognition, video-based ALPR using temporal information, multiplate processing, high-definition plate image processing, ambiguous-character recognition, and so on.

References

[1] G. Liu, Z. Ma, Z. Du, and C. Wen, "The calculation method of road travel time based on license plate recognition technology," in Proc. Adv. Inform. Tech. Educ. Commun. Comput. Inform. Sci., vol. 201, 2011, pp. 385–389.

[2] Y.-C. Chiou, L. W. Lan, C.-M. Tseng, and C.-C. Fan, "Optimal locations of license plate recognition to enhance the origin-destination matrix estimation," in Proc. Eastern Asia Soc. Transp. Stud., vol. 8, 2011, pp. 1–14.

[3] S. Kranthi, K. Pranathi, and A. Srisaila, "Automatic number plate recognition," Int. J. Adv. Tech., vol. 2, no. 3, pp. 408–422, 2011.

[4] C.-N. E. Anagnostopoulos, I. E. Anagnostopoulos, I. D. Psoroulas, V. Loumos, and E. Kayafas, "License plate recognition from still images and video sequences: A survey," IEEE Trans. Intell. Transp. Syst., vol. 9, no. 3, pp. 377–391, Sep. 2008.

[5] M. Sarfraz, M. J. Ahmed, and S. A. Ghazi, "Saudi Arabian license plate recognition system," in Proc. Int. Conf. Geom. Model. Graph., 2003, pp. 36–41.

[6] I. Paliy, V. Turchenko, V. Koval, A. Sachenko, and G. Markowsky, "Approach to recognition of license plate numbers using neural networks," in Proc. IEEE Int. Joint Conf. Neur. Netw., vol. 4, Jul. 2004, pp. 2965–2970.

[7] C. Nelson Kennedy Babu and K. Nallaperumal, "An efficient geometric feature based license plate localization and recognition," Int. J. Imaging Sci. Eng., vol. 2, no. 2, pp. 189–194, 2008.

[8] H. Bai and C. Liu, "A hybrid license plate extraction method based on edge statistics and morphology," in Proc. Int. Conf. Pattern Recognit., vol. 2, 2004, pp. 831–834.

[9] D. Zheng, Y. Zhao, and J. Wang, "An efficient method of license plate location," Pattern Recognit. Lett., vol. 26, no. 15, pp. 2431–2438, 2005.

[10] S. Wang and H. Lee, "Detection and recognition of license plate characters with different appearances," in Proc. Int. Conf. Intell. Transp. Syst., vol. 2, 2003, pp. 979–984.

[11] F. Faradji, A. H. Rezaie, and M. Ziaratban, "A morphological-based license plate location," in Proc. IEEE Int. Conf. Image Process., vol. 1, Sep.–Oct. 2007, pp. 57–60.

[12] K. Kanayama, Y. Fujikawa, K. Fujimoto, and M. Horino, "Development of vehicle-license number recognition system using real-time image processing and its application to travel-time measurement," in Proc. IEEE Veh. Tech. Conf., May 1991, pp. 798–804.

[13] V. Kamat and S. Ganesan, "An efficient implementation of the Hough transform for detecting vehicle license plates using DSPs," in Proc. Real-Time Tech. Applicat. Symp., 1995, pp. 58–59.

[14] C. Busch, R. Domer, C. Freytag, and H. Ziegler, "Feature based recognition of traffic video streams for online route tracing," in Proc. IEEE Veh. Tech. Conf., vol. 3, May 1998, pp. 1790–1794.

[15] S. Zhang, M. Zhang, and X. Ye, "Car plate character extraction under complicated environment," in Proc. IEEE Int. Conf. Syst. Man Cybern., vol. 5, Oct. 2004, pp. 4722–4726.

[16] M. J. Ahmed, M. Sarfraz, A. Zidouri, and W. G. Al-Khatib, "License plate recognition system," in Proc. IEEE Int. Conf. Electron. Circuits Syst., vol. 2, Dec. 2003, pp. 898–901.

[17] A. M. Al-Ghaili, S. Mashohor, A. Ismail, and A. R. Ramli, "A new vertical edge detection algorithm and its application," in Proc. Int. Conf. Comput. Eng. Syst., 2008, pp. 204–209.

[18] H.-J. Lee, S.-Y. Chen, and S.-Z. Wang, "Extraction and recognition of license plates of motorcycles and vehicles on highways," in Proc. Int. Conf. Pattern Recognit., 2004, pp. 356–359.


[19] Y.-P. Huang, C.-H. Chen, Y.-T. Chang, and F. E. Sandnes, "An intelligent strategy for checking the annual inspection status of motorcycles based on license plate recognition," Expert Syst. Applicat., vol. 36, pp. 9260–9267, Jul. 2009.

[20] T. D. Duan, D. A. Duc, and T. L. H. Du, "Combining Hough transform and contour algorithm for detecting vehicles' license-plates," in Proc. Int. Symp. Intell. Multimedia Video Speech Process., 2004, pp. 747–750.

[21] T. D. Duan, T. L. H. Du, T. V. Phuoc, and N. V. Hoang, "Building an automatic vehicle license-plate recognition system," in Proc. Int. Conf. Comput. Sci. RIVF, 2005, pp. 59–63.

[22] D.-S. Kim and S.-I. Chien, "Automatic car license plate extraction using modified generalized symmetry transform and image warping," in Proc. IEEE Int. Symp. Ind. Electron., vol. 3, Jun. 2001, pp. 2022–2027.

[23] J. Xu, S. Li, and Z. Chen, "Color analysis for Chinese car plate recognition," in Proc. IEEE Int. Conf. Robot. Intell. Syst. Signal Process., vol. 2, Oct. 2003, pp. 1312–1316.

[24] Z. Qin, S. Shi, J. Xu, and H. Fu, "Method of license plate location based on corner feature," in Proc. World Congr. Intell. Control Automat., vol. 2, 2006, pp. 8645–8649.

[25] J. Matas and K. Zimmermann, "Unconstrained license plate and text localization and recognition," in Proc. IEEE Int. Conf. Intell. Transp. Syst., Sep. 2005, pp. 225–230.

[26] B.-F. Wu, S.-P. Lin, and C.-C. Chiu, "Extracting characters from real vehicle license plates out-of-doors," IET Comput. Vis., vol. 1, no. 1, pp. 2–10, 2007.

[27] N. Bellas, S. M. Chai, M. Dwyer, and D. Linzmeier, "FPGA implementation of a license plate recognition SoC using automatically generated streaming accelerators," in Proc. IEEE Int. Parallel Distributed Process. Symp., Apr. 2006, pp. 8–15.

[28] P. Wu, H.-H. Chen, R.-J. Wu, and D.-F. Shen, "License plate extraction in low resolution video," in Proc. Int. Conf. Pattern Recognit., vol. 1, 2006, pp. 824–827.

[29] M. M. I. Chacon and S. A. Zimmerman, "License plate location based on a dynamic PCNN scheme," in Proc. Int. Joint Conf. Neural Netw., vol. 2, 2003, pp. 1195–1200.

[30] K. Miyamoto, K. Nagano, M. Tamagawa, I. Fujita, and M. Yamamoto, "Vehicle license-plate recognition by image analysis," in Proc. Int. Conf. Ind. Electron. Control Instrum., vol. 3, 1991, pp. 1734–1738.

[31] Y. S. Soh, B. T. Chun, and H. S. Yoon, "Design of real time vehicle identification system," in Proc. IEEE Int. Conf. Syst. Man Cybern., vol. 3, Oct. 1994, pp. 2147–2152.

[32] R. Parisi, E. D. D. Claudio, G. Lucarelli, and G. Orlandi, "Car plate recognition by neural networks and image processing," in Proc. IEEE Int. Symp. Circuits Syst., vol. 3, Jun. 1998, pp. 195–198.

[33] V. S. L. Nathan, J. Ramkumar, and S. K. Priya, "New approaches for license plate recognition system," in Proc. Int. Conf. Intell. Sens. Inform. Process., 2004, pp. 149–152.

[34] V. Seetharaman, A. Sathyakhala, N. L. S. Vidhya, and P. Sunder, "License plate recognition system using hybrid neural networks," in Proc. IEEE Annu. Meeting Fuzzy Inform., vol. 1, Jun. 2004, pp. 363–366.

[35] C. Anagnostopoulos, T. Alexandropoulos, S. Boutas, V. Loumos, and E. Kayafas, "A template-guided approach to vehicle surveillance and access control," in Proc. IEEE Conf. Adv. Video Signal Based Surveill., Sep. 2005, pp. 534–539.

[36] C.-T. Hsieh, Y.-S. Juan, and K.-M. Hung, "Multiple license plate detection for complex background," in Proc. Int. Conf. Adv. Inform. Netw. Applicat., vol. 2, 2005, pp. 389–392.

[37] F. Yang and Z. Ma, "Vehicle license plate location based on histogramming and mathematical morphology," in Proc. IEEE Workshop Automat. Identification Adv. Tech., Oct. 2005, pp. 89–94.

[38] R. Bremananth, A. Chitra, V. Seetharaman, and V. S. L. Nathan, "A robust video based license plate recognition system," in Proc. Int. Conf. Intell. Sensing Inform. Process., 2005, pp. 175–180.

[39] H.-K. Xu, F.-H. Yu, J.-H. Jiao, and H.-S. Song, "A new approach of the vehicle license plate location," in Proc. Int. Conf. Parall. Distr. Comput. Applicat. Tech., Dec. 2005, pp. 1055–1057.

[40] R. Zunino and S. Rovetta, "Vector quantization for license-plate location and image coding," IEEE Trans. Ind. Electron., vol. 47, no. 1, pp. 159–167, Feb. 2000.

[41] C.-N. E. Anagnostopoulos, I. E. Anagnostopoulos, V. Loumos, and E. Kayafas, "A license plate-recognition algorithm for intelligent transportation system applications," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 3, pp. 377–392, Sep. 2006.

[42] K. Deb, H.-U. Chae, and K.-H. Jo, "Vehicle license plate detection method based on sliding concentric windows and histogram," J. Comput., vol. 4, no. 8, pp. 771–777, 2009.

[43] H. Caner, H. S. Gecim, and A. Z. Alkar, "Efficient embedded neural-network-based license plate recognition system," IEEE Trans. Veh. Tech., vol. 57, no. 5, pp. 2675–2683, Sep. 2008.

[44] F. Kahraman, B. Kurt, and M. Gokmen, License Plate Character Segmentation Based on the Gabor Transform and Vector Quantization, vol. 2869. New York: Springer-Verlag, 2003, pp. 381–388.

[45] Y.-R. Wang, W.-H. Lin, and S.-J. Horng, "A sliding window technique for efficient license plate localization based on discrete wavelet transform," Expert Syst. Applicat., vol. 38, pp. 3142–3146, Oct. 2010.

[46] H. Zhang, W. Jia, X. He, and Q. Wu, "Learning-based license plate detection using global and local features," in Proc. Int. Conf. Pattern Recognit., vol. 2, 2006, pp. 1102–1105.

[47] W. Le and S. Li, "A hybrid license plate extraction method for complex scenes," in Proc. Int. Conf. Pattern Recognit., vol. 2, 2006, pp. 324–327.

[48] L. Dlagnekov, License Plate Detection Using AdaBoost. San Diego, CA: Computer Science and Engineering Dept., 2004.

[49] S. Z. Wang and H. J. Lee, "A cascade framework for a real-time statistical plate recognition system," IEEE Trans. Inform. Forensics Security, vol. 2, no. 2, pp. 267–282, Jun. 2007.

[50] X. Shi, W. Zhao, and Y. Shen, "Automatic license plate recognition system based on color image processing," Lecture Notes Comput. Sci., vol. 3483, pp. 1159–1168, 2005.

[51] E. R. Lee, P. K. Kim, and H. J. Kim, "Automatic recognition of a car license plate using color image processing," in Proc. IEEE Int. Conf. Image Process., vol. 2, Nov. 1994, pp. 301–305.

[52] S.-L. Chang, L.-S. Chen, Y.-C. Chung, and S.-W. Chen, "Automatic license plate recognition," IEEE Trans. Intell. Transp. Syst., vol. 5, no. 1, pp. 42–53, Mar. 2004.

[53] S. K. Kim, D. W. Kim, and H. J. Kim, "A recognition of vehicle license plate using a genetic algorithm based segmentation," in Proc. Int. Conf. Image Process., vol. 2, 1996, pp. 661–664.

[54] S. Yohimori, Y. Mitsukura, M. Fukumi, N. Akamatsu, and N. Pedrycz, "License plate detection system by using threshold function and improved template matching method," in Proc. IEEE Annu. Meeting Fuzzy Inform., vol. 1, Jun. 2004, pp. 357–362.

[55] W. Jia, H. Zhang, X. He, and Q. Wu, "Gaussian weighted histogram intersection for license plate classification," in Proc. Int. Conf. Pattern Recognit., vol. 3, 2006, pp. 574–577.

[56] Y.-Q. Yang, J. Bai, R.-L. Tian, and N. Liu, "A vehicle license plate recognition system based on fixed color collocation," in Proc. Int. Conf. Mach. Learning Cybern., vol. 9, 2005, pp. 5394–5397.

[57] W. Jia, H. Zhang, X. He, and M. Piccardi, "Mean shift for accurate license plate localization," in Proc. IEEE Conf. Intell. Transp. Syst., Sep. 2005, pp. 566–571.

[58] W. Jia, H. Zhang, and X. He, "Region-based license plate detection," J. Netw. Comput. Applicat., vol. 30, no. 4, pp. 1324–1333, 2007.

[59] L. Pan and S. Li, "A new license plate extraction framework based on fast mean shift," Proc. SPIE, vol. 7820, pp. 782007-1–782007-9, Aug. 2010.

[60] F. Wang, L. Man, B. Wang, Y. Xiao, W. Pan, and X. Lu, "Fuzzy-based algorithm for color recognition of license plates," Pattern Recognit. Lett., vol. 29, no. 7, pp. 1007–1020, 2008.

[61] X. Wan, J. Liu, and J. Liu, "A vehicle license plate localization method using color barycenters hexagon model," Proc. SPIE, vol. 8009, pp. 80092O-1–80092O-5, Jul. 2011.

[62] K. Deb and K.-H. Jo, "A vehicle license plate detection method for intelligent transportation system applications," Cybern. Syst. Int. J., vol. 40, no. 8, pp. 689–705, 2009.

[63] J. Matas and K. Zimmermann, "Unconstrained license plate and text localization and recognition," in Proc. IEEE Conf. Intell. Transp. Syst., Sep. 2005, pp. 572–577.

[64] S. Draghici, "A neural network based artificial vision system for license plate recognition," Int. J. Neural Syst., vol. 8, no. 1, pp. 113–126, 1997.

[65] F. Alegria and P. S. Girao, "Vehicle plate recognition for wireless traffic control and law enforcement system," in Proc. IEEE Int. Conf. Ind. Tech., Dec. 2006, pp. 1800–1804.

[66] H. Hontani and T. Koga, "Character extraction method without prior knowledge on size and position information," in Proc. IEEE Int. Veh. Electron. Conf., Sep. 2001, pp. 67–72.

[67] B. K. Cho, S. H. Ryu, D. R. Shin, and J. I. Jung, "License plate extraction method for identification of vehicle violations at a railway level crossing," Int. J. Automot. Tech., vol. 12, no. 2, pp. 281–289, 2011.

[68] W. T. Ho, H. W. Lim, Y. H. Tay, and Q. Binh, "Two-stage license plate detection using gentle Adaboost and SIFT-SVM," in Proc. 1st Asian Conf. Intell. Inform. Database Syst., 2009, pp. 109–114.

[69] H. W. Lim and Y. H. Tay, "Detection of license plate characters in natural scene with MSER and SIFT unigram classifier," in Proc. IEEE Conf. Sustainable Utilization Development Eng. Tech., Nov. 2010, pp. 95–98.


[70] J. A. G. Nijhuis, M. H. T. Brugge, K. A. Helmholt, J. P. W. Pluim, L. Spaanenburg, R. S. Venema, and M. A. Westenberg, "Car license plate recognition with neural networks and fuzzy logic," in Proc. IEEE Int. Conf. Neur. Netw., vol. 5, Dec. 1995, pp. 2232–2236.

[71] M. H. T. Brugge, J. H. Stevens, J. A. G. Nijhuis, and L. Spaanenburg, "License plate recognition using DTCNNs," in Proc. IEEE Int. Workshop Cellular Neur. Netw. Their Applicat., Apr. 1998, pp. 212–217.

[72] J.-F. Xu, S.-F. Li, and M.-S. Yu, "Car license plate extraction using color and edge information," in Proc. Int. Conf. Mach. Learn. Cybern., vol. 6, 2004, pp. 3904–3907.

[73] S. H. Park, K. I. Kim, K. Jung, and H. J. Kim, "Locating car license plates using neural networks," Electron. Lett., vol. 35, no. 17, pp. 1475–1477, 1999.

[74] K. K. Kim, K. I. Kim, J. B. Kim, and H. J. Kim, "Learning-based approach for license plate recognition," in Proc. IEEE Signal Process. Soc. Workshop Neur. Netw. Signal Process., vol. 2, Dec. 2000, pp. 614–623.

[75] M.-L. Wang, Y.-H. Liu, B.-Y. Liao, Y.-S. Lin, and M.-F. Horng, "A vehicle license plate recognition system based on spatial/frequency domain filtering and neural networks," in Proc. Comput. Collective Intell. Tech. Applicat., LNCS 6423, 2010, pp. 63–70.

[76] M.-K. Wu, L.-S. Wei, H.-C. Shih, and C. C. Ho, "License plate detection based on 2-level 2-D Haar wavelet transform and edge density verification," in Proc. IEEE Int. Symp. Ind. Electron., Jul. 2009, pp. 1699–1704.

[77] Y. Lee, T. Song, B. Ku, S. Jeon, D. K. Han, and H. Ko, "License plate detection using local structure patterns," in Proc. IEEE Int. Conf. Adv. Video Signal Based Surveillance, Sep. 2010, pp. 574–579.

[78] S. Mao, X. Huang, and M. Wang, "An adaptive method for Chinese license plate location," in Proc. World Congr. Intell. Control Automat., 2010, pp. 6173–6177.

[79] H. Mahini, S. Kasaei, and F. Dorri, "An efficient features-based license plate localization method," in Proc. Int. Conf. Pattern Recognit., vol. 2, 2006, pp. 841–844.

[80] F. Porikli and T. Kocak, "Robust license plate detection using covariance descriptor in a neural network framework," in Proc. IEEE Int. Conf. Video Signal Based Surveillance, Nov. 2006, p. 107.

[81] Z. Chen, C. Liu, F. Chang, and G. Wang, "Automatic license plate location and recognition based on feature salience," IEEE Trans. Veh. Tech., vol. 58, no. 7, pp. 3781–3785, 2009.

[82] T. Naito, T. Tsukada, K. Yamada, K. Kozuka, and S. Yamamoto, "Robust license-plate recognition method for passing vehicles under outside environment," IEEE Trans. Veh. Tech., vol. 49, no. 6, pp. 2309–2319, Nov. 2000.

[83] C. Anagnostopoulos, T. Alexandropoulos, V. Loumos, and E. Kayafas, "Intelligent traffic management through MPEG-7 vehicle flow surveillance," in Proc. IEEE Int. Symp. Modern Comput., Oct. 2006, pp. 202–207.

[84] [Online]. Available: http://www.fedsig.com/solutions/what-is-alpr

[85] [Online]. Available: https://www.research.ibm.com/haifa/research.shtml

[86] [Online]. Available: http://www.nedapavi.com/solutions/cases/choosingbetween-anpr-and-transponder-based-vehicle-id.html

[87] [Online]. Available: http://www.ezcctv.com/license-plate-recognition.htm

[88] N. Otsu, "A threshold selection method for gray level histograms," IEEE Trans. Syst. Man Cybern., vol. 9, no. 1, pp. 62–66, Jan. 1979.

[89] X. Xu, Z. Wang, Y. Zhang, and Y. Liang, "A method of multiview vehicle license plates location based on rectangle features," in Proc. Int. Conf. Signal Process., vol. 3, 2006, pp. 16–20.

[90] M.-S. Pan, J.-B. Yan, and Z.-H. Xiao, "Vehicle license plate character segmentation," Int. J. Automat. Comput., vol. 5, no. 4, pp. 425–432, 2008.

[91] M.-S. Pan, Q. Xiong, and J.-B. Yan, "A new method for correcting vehicle license plate tilt," Int. J. Automat. Comput., vol. 6, no. 2, pp. 210–216, 2009.

[92] K. Deb, A. Vavilin, J.-W. Kim, T. Kim, and K.-H. Jo, "Projection and least square fitting with perpendicular offsets based vehicle license plate tilt correction," in Proc. SICE Annu. Conf., 2010, pp. 3291–3298.

[93] P. Comelli, P. Ferragina, M. N. Granieri, and F. Stabile, "Optical recognition of motor vehicle license plates," IEEE Trans. Veh. Tech., vol. 44, no. 4, pp. 790–799, Nov. 1995.

[94] Y. Zhang and C. Zhang, "A new algorithm for character segmentation of license plate," in Proc. IEEE Intell. Veh. Symp., Jun. 2003, pp. 106–109.

[95] D. Llorens, A. Marzal, V. Palazon, and J. M. Vilar, "Car license plates extraction and recognition based on connected components analysis and HMM decoding," Lecture Notes Comput. Sci., vol. 3522, pp. 571–578, 2005.

[96] C. Coetzee, C. Botha, and D. Weber, "PC based number plate recognition system," in Proc. IEEE Int. Symp. Ind. Electron., Jul. 1998, pp. 605–610.

[97] T. Nukano, M. Fukumi, and M. Khalid, "Vehicle license plate character recognition by neural networks," in Proc. Int. Symp. Intell. Signal Process. Commun. Syst., 2004, pp. 771–775.

[98] V. Shapiro and G. Gluhchev, "Multinational license plate recognition system: Segmentation and classification," in Proc. Int. Conf. Pattern Recognit., vol. 4, 2004, pp. 352–355.

[99] B.-F. Wu, S.-P. Lin, and C.-C. Chiu, "Extracting characters from real vehicle license plates out-of-doors," IET Comput. Vision, vol. 1, no. 1, pp. 2–10, 2007.

[100] Y. Cheng, J. Lu, and T. Yahagi, "Car license plate recognition based on the combination of principal component analysis and radial basis function networks," in Proc. Int. Conf. Signal Process., 2004, pp. 1455–1458.

[101] C. A. Rahman, W. Badawy, and A. Radmanesh, "A real time vehicle's license plate recognition system," in Proc. IEEE Conf. Adv. Video Signal Based Surveillance, Jul. 2003, pp. 163–166.

[102] H. A. Hegt, R. J. Haye, and N. A. Khan, "A high performance license plate recognition system," in Proc. IEEE Int. Conf. Syst. Man Cybern., vol. 5, Oct. 1998, pp. 4357–4362.

[103] B. Shan, "Vehicle license plate recognition based on text-line construction and multilevel RBF neural network," J. Comput., vol. 6, no. 2, pp. 246–253, 2011.

[104] J. Barroso, E. Dagless, A. Rafael, and J. Bulas-Cruz, "Number plate reading using computer vision," in Proc. IEEE Int. Symp. Ind. Electron., Jul. 1997, pp. 761–766.

[105] Q. Gao, X. Wang, and G. Xie, "License plate recognition based on prior knowledge," in Proc. IEEE Int. Conf. Automat. Logistics, Aug. 2007, pp. 2964–2968.

[106] J.-M. Guo and Y.-F. Liu, "License plate localization and character segmentation with feedback self-learning and hybrid binarization techniques," IEEE Trans. Veh. Tech., vol. 57, no. 3, pp. 1417–1424, May 2008.

[107] K. B. Kim, S. W. Jang, and C. K. Kim, "Recognition of car license plate by using dynamical thresholding method and enhanced neural networks," Comput. Anal. Images Patterns, vol. 2756, pp. 309–319, Aug. 2003.

[108] A. Capar and M. Gokmen, "Concurrent segmentation and recognition with shape-driven fast marching methods," in Proc. Int. Conf. Pattern Recognit., vol. 1, 2006, pp. 155–158.

[109] J. A. Sethian, "A fast marching level set method for monotonically advancing fronts," Proc. Natl. Acad. Sci., vol. 93, no. 4, pp. 1591–1595, 1996.

[110] P. Stec and M. Domanski, "Efficient unassisted video segmentation using enhanced fast marching," in Proc. Int. Conf. Image Process., vol. 2, 2003, pp. 427–430.

[111] S. Nomura, K. Yamanaka, O. Katai, H. Kawakami, and T. Shiose, "A novel adaptive morphological approach for degraded character image segmentation," Pattern Recognit., vol. 38, no. 11, pp. 1961–1975, 2005.

[112] S. Nomura, K. Yamanaka, O. Katai, and H. Kawakami, "A new method for degraded color image binarization based on adaptive lightning on gray scale versions," IEICE Trans. Inform. Syst., vol. E87-D, no. 4, pp. 1012–1020, 2004.

[113] P. Soille, Morphological Image Analysis: Principles and Applications. Berlin, Germany: Springer-Verlag, 1999.

[114] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1993.

[115] D.-J. Kang, "Dynamic programming-based method for extraction of license plate numbers of speeding vehicle on the highway," Int. J. Automotive Tech., vol. 10, no. 2, pp. 205–210, 2009.

[116] S. Tang and W. Li, "Number and letter character recognition of vehicle license plate based on edge Hausdorff distance," in Proc. Int. Conf. Parallel Distributed Comput. Applicat. Tech., 2005, pp. 850–852.

[117] X. Lu, X. Ling, and W. Huang, "Vehicle license plate character recognition," in Proc. Int. Conf. Neur. Netw. Signal Process., vol. 2, 2003, pp. 1066–1069.

[118] T. Naito, T. Tsukada, K. Yamada, K. Kozuka, and S. Yamamoto, "License plate recognition method for inclined plates outdoors," in Proc. Int. Conf. Inform. Intell. Syst., 1999, pp. 304–312.

[119] Y. Dia, N. Zheng, X. Zhang, and G. Xuan, "Automatic recognition of province name on the license plate of moving vehicle," in Proc. Int. Conf. Pattern Recognit., vol. 2, 1988, pp. 927–929.

[120] F. Aghdasi and H. Ndungo, "Automatic license plate recognition system," in Proc. AFRICON Conf. Africa, vol. 1, 2004, pp. 45–50.


[121] R. Juntanasub and N. Sureerattanan, “A simple OCR method fromstrong perspective view,” in Proc. Appl. Imagery Pattern Recognit.Workshop, 2004, pp. 235–240.

[122] M.-A. Ko and Y.-M. Kim, “Multifont and multisize character recogni-tion based on the sampling and quantization of an unwrapped contour,”in Proc. Int. Conf. Pattern Recognit., vol. 3. 1996, pp. 170–174.

[123] M.-K. Kim and Y.-B. Kwon, “Recognition of gray character usingGabor filters,” in Proc. Int. Conf. Inform. Fusion, vol. 1. 2002, pp.419–424.

[124] S. N. H. S. Abdullah, M. Khalid, R. Yusof, and K. Omar, “Licenseplate recognition using multicluster and multilayer neural networks,”Inform. and Commun. Tech., vol. 1, pp. 1818–1823, Apr. 2006.

[125] S. N. H. S. Abdullah, M. Khalid, R. Yusof, and K. Omar, “Comparisonof feature extractors in license plate recognition,” in Proc. Asia Int.Conf. Modeling Simul., 2007, pp. 502–506.

[126] P. Duangphasuk and A. Thammano, “Thai vehicle license platerecognition using the hierarchical cross-correlation ARTMAP,” inProc. IEEE Int. Conf. Intell. Syst., Sep. 2006, pp. 652–655.

[127] J. Jiao, Q. Ye, and Q. Huang, “A configurable method for multistylelicense plate recognition,” Pattern Recognit., vol. 42, no. 3, pp.358–369, 2009.

[128] Y. Amit, D. Geman, and X. Fan, “A coarse-to-fine strategy formulticlass shape detection,” IEEE Trans. Pattern Anal. Mach. Intell.,vol. 26, no. 12, pp. 1606–1621, Dec. 2004.

[129] Y. Amit, “A neural network architecture for visual selection,” NeuralComput., vol. 12, no. 5, pp. 1059–1082, 2000.

[130] Y. Amit and D. Geman, “A computational model for visual selection,”Neural Comput., vol. 11, no. 7, pp. 1691–1715, 1999.

[131] P. Zhang and L. H. Chen, “A novel feature extraction method andhybrid tree classification for handwritten numeral recognition,” PatternRecognit. Lett., vol. 23, no. 1, pp. 45–56, 2002.

[132] H. E. Kocer and K. K. Cevik, “Artificial neural networks based vehiclelicense plate recognition,” in Proc. Comput. Sci., vol. 3. 2011, pp.1033–1037.

[133] C. J. Ahmad and M. Shridhar, “Recognition of handwritten numeralswith multiple feature and multistage classifier,” Pattern Recognit., vol.2, no. 28, pp. 153–160, 1995.

[134] Y. S. Huang and C. Y. Suen, “A method of combining multiple expertsfor the recognition of unconstrained handwritten numerals,” IEEETrans. Pattern Anal. Mach. Intell., vol. 17, no. 1, pp. 90–93, Jan. 1995.

[135] H. J. Kang and J. Kim, “Probabilistic framework for combiningmultiple classifier at abstract level,” in Proc. Int. Conf. DocumentAnal. Recognit., vol. 1. 1997, pp. 870–874.

[136] N. Thome and L. Robinault, “A cognitive and video-based approachfor multinational license plate recognition,” Mach. Vision Applicat.,vol. 22, no. 2, pp. 389–407, 2011.

[137] K. V. Suresh, G. M. Kumar, and A. N. Rajagopalan, “Superresolutionof license plates in real traffic videos,” IEEE Trans. Intell. Transp.Syst., vol. 8, no. 2, pp. 321–331, 2007.

[138] L. Dlagnekov, “Recognizing cars,” Dept. Comput. Sci. Eng., Univ.California, San Diego, Tech. Rep. CS2005-0833, 2005.

[139] C. Arth, F. Limberger, and H. Bischof, “Real-time license platerecognition on an embedded DSP-platform,” in Proc. IEEE Conf.Comput. Vision Pattern Recognit., Jun. 2007, pp. 1–8.

[140] I. Giannoukos, C.-N. Anagnostopoulosa, V. Loumosa, and E. Kayafasa,“Operator context scanning to support high segmentation rates for realtime license plate recognition,” Pattern Recognit., vol. 43, no. 11, pp.3866–3878, 2010.

[141] Y. Wang, H. Zhang, X. Fang, and J. Guo, “Low-resolution Chinesecharacter recognition of vehicle license plate based on ALBP andGabor filters,” in Proc. Int. Conf. Adv. Pattern Recognit., 2009, pp.302–305.

[142] F. Liang, Y. Liu, and G. Yao, “Recognition of blurred license plate of vehicle based on natural image matting,” Proc. SPIE, vol. 7495, pp. 749527-1–749527-6, Oct. 2009.

[143] J. Yuan, S.-D. Du, and X. Zhu, “Fast super-resolution for license plate image reconstruction,” in Proc. Int. Conf. Pattern Recognit., 2008, pp. 1–4.

[144] X. Jia, X. Wang, W. Li, and H. Wang, “A novel algorithm for character segmentation of degraded license plate based on prior knowledge,” in Proc. IEEE Int. Conf. Automat. Logistics, Aug. 2007, pp. 249–253.

[145] X. Fan and G. Fan, “Graphical models for joint segmentation and recognition of license plate characters,” IEEE Signal Process. Lett., vol. 16, no. 1, pp. 10–13, Jan. 2009.

[146] A. Psyllos, C. N. Anagnostopoulos, and E. Kayafas, “Vehicle model recognition from frontal view image measurements,” Comput. Standards Interfaces, vol. 33, no. 2, pp. 142–151, 2011.

[147] A. P. Psyllos, C.-N. E. Anagnostopoulos, and E. Kayafas, “Vehicle logo recognition using a SIFT-based enhanced matching scheme,” IEEE Trans. Intell. Transp. Syst., vol. 11, no. 2, pp. 322–328, Jun. 2010.

Shan Du (S’05–M’12) received the M.S. degree in electrical and computer engineering from the University of Calgary, Calgary, AB, Canada, in 2002, and the Ph.D. degree in electrical and computer engineering from the University of British Columbia, Vancouver, BC, Canada, in 2008.

She has been a Research Scientist with IntelliView Technologies, Inc., Calgary, since 2009. She has authored more than 20 international journal and conference papers. Her current research interests include pattern recognition, computer vision, and image or video processing.

Mahmoud Ibrahim received the M.S. degree in electrical and computer engineering from the University of Calgary, Calgary, AB, Canada, in 2007.

He is currently an Engineer with IntelliView Technologies, Inc., Calgary.

Mohamed Shehata (SM’11) received the B.Sc. and M.Sc. degrees from Zagazig University, Zagazig, Egypt, in 1996 and 2001, respectively, and the Ph.D. degree from the Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB, Canada.

He is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Faculty of Engineering, Benha University, Cairo, Egypt. He was previously a Post-Doctoral Fellow with the Laboratory for Integrated Video Systems, directing a project funded by the City of Calgary, Alberta Infrastructure and Transportation, and Transport Canada. He has authored more than 40 refereed papers and holds three patents. His current research interests include software development in real-time systems, embedded software systems, image or video processing, and computer vision.

Wael Badawy (SM’07) received the B.Sc. and M.Sc. degrees from Alexandria University, Alexandria, Egypt, in 1994 and 1996, respectively, and the M.Sc. and Ph.D. degrees from the Center for Advanced Computer Studies, University of Louisiana, Lafayette, in 1998 and 2000, respectively.

He is currently a Professor with the Department of Computer Engineering, College of Computer and Information Technology, Umm Al-Qura University, Makkah, Saudi Arabia. He is also the President of IntelliView Technologies, Inc., Calgary, AB, Canada.

He has been a Professor and an iCore Chair Associate with the University of Calgary, Calgary. He is a leading Researcher in video surveillance technology. He has published more than 400 peer-reviewed technical papers and made over 50 contributions to the development of the ISO standards, which is more than 75% of the hardware reference model for the H.264 compression standard. He is listed as a Primary Contributor in the VSI Alliance, developing the platform-based design definitions and taxonomy, PBD 11.0, in 2003. He has authored 13 books and papers in conference proceedings. He is a co-author of the international video standards known as MPEG4/H.264. He holds eight patents and has 13 patent applications in the areas of video systems and architectures.

Dr. Badawy represents Canada in ISO/TC223 as the Societal Security Chairman of the Canadian Advisory Committee on ISO/IEC/JTC1/SC6 Telecommunications and Information Exchange Between Systems and as the Head of the Canadian Delegation. He has received over 61 international and national awards for his technical and commercial work, innovations, and contributions to industry, academia, and society. He enjoys giving back as a mentor in the Canadian Youth Business Foundation, supporting Canadians under 34 in starting and building businesses.