
1466 IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 15, NO. 4, AUGUST 2014

Robust Traffic Sign Recognition Based on Color Global and Local Oriented Edge Magnitude Patterns

Xue Yuan, Xiaoli Hao, Houjin Chen, and Xueye Wei

Abstract—Most of the existing traffic sign recognition (TSR) systems make use of the inner region of the signs or local features such as Haar, histograms of oriented gradients (HOG), and scale-invariant feature transform for recognition, whereas these features are still limited in dealing with rotation, illumination, and scale variations. A good feature of a traffic sign is desired to be discriminative and robust. In this paper, a novel Color Global and Local Oriented Edge Magnitude Pattern (Color Global LOEMP) is proposed. The Color Global LOEMP is a framework that is able to effectively combine color, global spatial structure, global direction structure, and local shape information and balance the two concerns of distinctiveness and robustness. The contributions of this paper are as follows: 1) color angular patterns are proposed to provide the color distinguishing information; 2) a context frame is established to provide global spatial information; because the context frame is established by the shape of the traffic sign, the cells remain aligned with the inside part of the traffic sign even when rotation and scale variations occur; and 3) a LOEMP is proposed to represent each cell. In each cell, the distribution of the orientation patterns is described by the HOG feature, and then each direction of the HOG is represented in detail by the occurrence histogram of local binary patterns in this direction. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying, even for images containing traffic signs that have been rotated, damaged, altered in color, or have undergone affine transformations, or images that were photographed under different weather or illumination conditions.

Index Terms—Histogram of oriented gradient (HOG), local binary pattern (LBP), rotation invariant, traffic sign recognition (TSR).

I. INTRODUCTION

AT PRESENT, intelligent transportation system technology is developing at a very rapid pace. Traffic problems, such as driving safety, city traffic congestion, and transportation efficiency, are expected to be alleviated through the application of information technology and the intelligent transportation of vehicles. As an important subsystem in intelligent transportation system technology, traffic sign recognition (TSR) systems based on computer vision have gradually become an important research topic in the field of intelligent transportation system technology [1]–[10].

Manuscript received February 25, 2013; revised July 17, 2013, October 8, 2013, and December 14, 2013; accepted January 6, 2014. Date of publication May 5, 2014; date of current version August 1, 2014. This work was supported in part by the Specialized Research Fund for the Doctoral Program of Higher Education under Grants 20110009120003 and 20110009110001, by the National Natural Science Foundation of China under Grants 61301186 and 61271305, and by the School Foundation of Beijing Jiaotong University under Grants W11JB00460 and 2010JBZ010. The Associate Editor for this paper was S. S. Nedevschi.

X. Yuan is with the School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing 100044, China, and also with the Chinese Academy of Surveying and Mapping, Beijing 100830, China (e-mail: xyuan@bjtu.edu.cn).

X. Hao, H. Chen, and X. Wei are with the School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing 100044, China.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TITS.2014.2298912

Traffic sign images are generally obtained from outdoor natural scenes by means of cameras installed on vehicles, and the images are then input to a computer for processing. Due to the many types of complicated factors present outdoors, outdoor environments are much more complex and challenging than indoor ones. The main difficulty of TSR systems is how to extract a robust descriptor with rich information under various lighting conditions, shape rotations, affine transformations, dimension changes, and so on.

Through in-depth study of traffic sign data sets, some common characteristics of traffic signs may be observed, such as the following.

1) Some traffic signs have the same local features but different background colors [see Fig. 1(a)].
2) Some traffic signs have the same local features and background colors but different distributions of global shapes [see Fig. 1(b)].
3) Some traffic signs share the same components but have different meanings [see Fig. 1(c)].

The proposed system consists of the following two stages: detection, performed using maximally stable extremal regions (MSERs), and recognition, performed with the novel Color Global and Local Oriented Edge Magnitude Pattern (Color Global LOEMP) features, which are classified using a support vector machine (SVM). To the best of our knowledge, this is the first paper that adopts local binary pattern (LBP)-based features for TSR; these features are more discriminative and robust in handling shape rotations, various illumination conditions, and scale changes in traffic sign images than those of existing systems. The remainder of this paper is organized as follows. In Section II, we review previous work and describe our improvements. Section III presents the traffic sign detection algorithm, and Section IV details the Color Global LOEMP feature extraction. Recognition based on SVMs is presented in Section V. The experimental results are presented in Section VI. Finally, a conclusion is presented in Section VII.

II. RELATED WORK

In recent years, research with regard to TSR has grown rapidly due to the significant need for such systems in future vehicles.



Fig. 1. Examples of traffic signs. (a) Traffic signs with the same local texture patterns but different background colors. (b) Traffic signs with the same local texture patterns and background colors but different distributions of global geometrics. (c) Traffic signs that share the same components but have different meanings.

The most common approach consists of two main stages: detection and recognition. The detection stage identifies the regions of interest and is performed mostly using color segmentation, followed by some form of shape recognition. The detected candidates are then either identified or rejected during the recognition stage. The features used in the recognition stage are the inner components of traffic signs, Haar features, and HOG features. Classifiers such as SVMs [1]–[3], neural networks [4], and fuzzy regression tree frameworks [5] have been reported in recent papers.

In the detection stage, the majority of systems make use of color information as a method for segmenting the image. The performance of color-based traffic sign detection is often reduced in scenes with strong illumination, poor lighting, or adverse weather conditions. Color models, such as hue–saturation–value (HSV), YUV, Y′CbCr, and CIECAM97, have been used in an attempt to overcome these issues. For example, Gao et al. [6] proposed a TSR system based on the extraction of the red and blue color regions in the CIECAM97 color model. References [1] and [2] extracted color information for traffic sign detection in the HSV color model. Reference [4] detected potential road signs by means of the distribution of red pixels within the image in the Y′CbCr color model. In contrast, several approaches ignore color information entirely and instead use only shape information from grayscale images. For example, Loy [7] proposed a system that used local radial symmetry to highlight the points of interest in each image and detect octagonal, square, and triangular traffic signs. Recently, Greenhalgh and Mirmehdi [3] proposed a traffic sign detection algorithm using a novel application of MSERs; the authors validated the efficiency of the MSER method.

In the recognition stage, the majority of systems make use of the inner region. For example, Fleyeh and Davami [8] extracted the binary inner component of traffic signs for recognition. They performed size and rotation normalization before recognition in order to reduce the effects caused by shape rotation, affine transformation, and dimension changes. Then, they used the principal component analysis algorithm to determine the most effective eigenvectors of the traffic signs. De la Escalera et al. [9] indicated that a sign is the sum of a color border, an achromatic (white and/or black) inner component, and a shape. They proposed a method that computes color energy, chromatic energy, gradient energy, and distance energy. They also proposed two techniques, namely, simulated annealing and genetic algorithms, for determining the minima of the energy function as traffic sign regions. Maldonado-Bascón et al. also proposed a TSR system using gray images of inner regions [1], [2], in which the gray images were normalized and the contrasts were stretched and equalized before recognition to reduce the effect of illumination variations.

Some recent systems have made use of HOG, Haar, and scale-invariant feature transform (SIFT) features for TSR. For example, Ruta et al. [5] extracted Haar and HOG features from traffic sign images. Greenhalgh and Mirmehdi [3] proposed a TSR algorithm using HOG features. Takaki et al. [10] and Ihara et al. [11] proposed a TSR method based on keypoint classification by SIFT. In their system, two different feature subspaces are constructed from gradient and general images, and detected keypoints are then projected onto both subspaces. SIFT is a local descriptor that remains largely unchanged across images of different scales and small rotation angles. Abdel-Hakim and Farag [12] proposed a traffic sign detection and recognition technique by augmenting SIFT with new features related to the color of the keypoints. In [13], Yuan et al. proposed a context-aware SIFT-based algorithm for TSR. Furthermore, a method for computing the similarity between two images is proposed in their paper, which focuses on the distribution of the matching points rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. However, some issues still remain when illumination and rotation variations occur. For example, the performance of inner-region-based TSR is often reduced in scenes with various illumination conditions, rotation and scale variations, and affine transformations. HOG [14] is very effective in capturing gradients, long known to be crucial for vision, and is robust to appearance and illumination changes. However, images are clearly more than just gradients, and traditional HOG is still limited in handling rotation variations. SIFT has been proposed to solve these problems and is robust to various illumination conditions, shape rotations, and scale changes. However, the dimension of the SIFT feature depends on the number of detected keypoints, and the number of keypoints always differs among images. Because the feature dimensions differ, it is difficult to design a suitable classifier for classification based on the SIFT feature.


Fig. 2. Examples in the process of traffic sign detection. (a) Original images. (b) Normalized red/blue images. (c) MSERs (each level of the MSER is painted a different color). (d) Borders of the MSERs. (e) Detection results.

In this paper, an LBP-based feature, which is robust to shape rotations, various illumination conditions, and scale changes in traffic sign images, is proposed. Furthermore, the LBP-based feature is more suitable for combination with common classifiers.

Ojala et al. first proposed the concept of LBP [15], which may be converted to a rotation-invariant version for application in texture classification. Various extensions of LBP, such as LBP variance with global matching [16], dominant LBPs [17], completed LBPs [18], and joint distributions of local patterns with Gaussian mixtures [19], have been proposed for rotation-invariant texture classification.

To the best of our knowledge, there have been no public reports of using LBP for TSR; this is due to the fact that the following issues remain in traditional LBP.
1) LBP is unable to provide color information.
2) LBP only focuses on local textures while ignoring the distribution of global shapes.
3) The rotation-invariant version of LBP proposed in [15], named LBP^{riu2}, has a very small size, and such a small feature cannot effectively represent a complex traffic sign image.

In this paper, a novel traffic sign descriptor is proposed, known as Color Global LOEMP, which is robust to illumination conditions, scale, and rotation variations and balances the two concerns of distinctiveness and robustness. The main contributions of this paper are as follows.

1) The proposed color angular feature is able to exploit the discriminative color information derived from the spatiochromatic texture patterns of different spectral channels in a local region, which can respect the abundant color information of traffic signs.
2) A novel context frame is established to provide global spatial information. The context frame is established by the shape of the traffic sign, thus allowing the cells to be aligned well with the inside part of the traffic sign, even when rotation and scale variations occur.

3) A novel descriptor known as LOEMP is extracted from each cell, which is robust to lighting conditions and rotation variations. We apply the concept of calculating both the HOG-based structure, to describe the distribution of the holistic orientation of the local shape for each orientation, and the LBP-based structure, to describe the distribution of the local shape for each orientation.

Fig. 3. Flow of the proposed TSR system.

Comparative experiments were performed to test the effectiveness of the proposed Color Global LOEMP. The public traffic sign data sets used were the Spanish traffic sign set [20], the German Traffic Sign Recognition Benchmark (GTSRB) data set [21], and an image data set captured from a moving vehicle on cluttered Chinese highways. The experimental results show that the proposed Color Global LOEMP feature is able to yield excellent performance when applied to challenging traffic sign images.

III. TRAFFIC SIGN DETECTION

This paper uses the MSER method and shape information to extract traffic signs. First, the candidate regions of the traffic signs are detected as MSERs [see Fig. 2(c), in which each MSER level is painted a different color]; MSERs are regions that maintain their shapes when the image is thresholded at several levels. Then, the border of each MSER is extracted [see Fig. 2(d)]. Finally, elliptical, triangular, quadrilateral, and octagonal regions are located as the candidate regions for further recognition [see Fig. 2(e)]. Examples of this traffic sign detection process are illustrated in Fig. 2, and each step is presented in detail as follows.

Greenhalgh and Mirmehdi [3] proposed a traffic sign detection algorithm using a novel application of MSERs, and they proved that MSERs are robust to both variations in lighting and contrast. In this paper, MSERs are adopted for detecting the traffic sign candidate regions. For each pixel of the original image, values are found for the ratio of the blue channel to the sum of all channels and the ratio of the red channel to the sum of all channels. The greater of these two values is used as the pixel value of the normalized red/blue image, i.e.,

\Omega = \max\left(\frac{R}{R+G+B}, \frac{B}{R+G+B}\right).   (1)

MSERs are found for this image [see Fig. 2(b)]. Each image is binarized at a number of different threshold levels, and the connected components at each level are found. The connected components that maintain their shape through several threshold levels are selected as MSERs [see Fig. 2(c)].
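As a concrete illustration of this detection stage, the following minimal sketch computes the normalized red/blue image of (1) and extracts MSERs from it with OpenCV. The function names, the small constant added to the denominator, and the default MSER parameters are assumptions made for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def normalized_red_blue(bgr):
    """Eq. (1): per-pixel max of R/(R+G+B) and B/(R+G+B), scaled to 8 bits."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    s = b + g + r + 1e-6                      # small constant avoids division by zero
    omega = np.maximum(r / s, b / s)
    return (omega * 255).astype(np.uint8)

def detect_candidate_regions(bgr):
    """Find MSERs on the normalized red/blue image (detection-stage sketch)."""
    omega = normalized_red_blue(bgr)
    mser = cv2.MSER_create()                  # default thresholding/area settings; tune as needed
    regions, bboxes = mser.detectRegions(omega)
    return regions, bboxes
```

The elliptical, triangular, quadrilateral, and octagonal candidates would then be filtered from the borders of these regions, as described next.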

An efficient ellipse detection method [22] is adopted to locate the ellipses from the MSER borders, and the general regular polygon detector proposed by Loy [7] is adopted to detect the triangular, quadrilateral, and octagonal regions.

It should be noted that the detected traffic sign candidate regions contain many noise blobs; the recognition system is used to judge whether a candidate region is a traffic sign. Examples of extracted traffic signs are shown in Fig. 2(e).

IV. COLOR GLOBAL LOEMP FEATURE EXTRACTION

The flow of the proposed TSR system is illustrated in Fig. 3. First, the input color image is divided into three color components, and the color angle patterns are computed as chromatic information. Then, a context frame is used to divide the traffic sign region into several cells, in order to describe the global spatial structure of each color angle pattern. After that, the LOEMP is extracted from each cell, and the LOEMPs extracted from all the color angle patterns and cells are combined as the final descriptor. Finally, an SVM is used as the classifier.

A. Global Feature Extraction

1) Color Angle Patterns: For the purpose of extracting the discriminative patterns contained among the different spectral bands, the ratio of the pixels between a pair of the spectral-band images is calculated (see Fig. 4), using a method proposed by Choi et al. [23]. This directional information may be useful for extracting discriminative color angular patterns for classification.

The ratio of the pixel values between the spectral bands is defined as

\pi_{i,j} = \frac{v_j}{v_i + \gamma}, \quad \text{for } i < j,\; i = 1, \ldots, K   (2)


Fig. 4. Illustration of extracting the color angular patterns from pixel C obtained from three color bands.

Fig. 5. Traffic sign detection and establishing the context frame.

where v_i and v_j are the elements of the color vector c associated with the ith and jth spectral bands of the color image, respectively. Note that \gamma is a small-valued constant used to avoid a zero-valued denominator. The color angle between the ith and jth spectral bands is computed as

\theta_{(i,j)} = \tan^{-1}\left(\pi_{(i,j)}\right)   (3)

where the values of \theta_{(i,j)} fall between 0° and 90°. Note that, as shown in Fig. 4, \theta represents the angle computed between the axis corresponding to the ith spectral band and the reference line, which is formed by projecting C onto the plane associated with the ith and jth spectral bands.
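The following minimal sketch computes color angle patterns per (2) and (3) for a color image. The choice of band pairs (θRG and θBG, the two patterns used in the experiments of Section VI), the band ordering within each pair, and the value of γ are assumptions made for illustration.

```python
import numpy as np

def color_angle_patterns(rgb, gamma=1e-3):
    """Eqs. (2)-(3): per-pixel color angles (in degrees) between pairs of spectral bands."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    theta_rg = np.degrees(np.arctan(g / (r + gamma)))   # angle between the R and G bands
    theta_bg = np.degrees(np.arctan(g / (b + gamma)))   # angle between the B and G bands
    return theta_rg, theta_bg
```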

2) Global Spatial Structure: In this paper, a novel context frame, which is robust to image rotation and scale variations, is established to provide the information of a global spatial structure. In order to establish a more robust context frame, a frame of overlapping cells is built in polar coordinates. As shown in Figs. 5 and 6, the proposed context frame divides the images into M × N cells in polar coordinates [Fig. 5(a2)–(c2) shows examples of 2 × 4 cells, and Fig. 6 shows an example of 3 × 8 cells], where M is the number of cells on the radial coordinate, and N is the number of cells on the angular coordinate. The proposed implementation is not exactly a logarithmic polar coordinate system, since the radial increment is 0. It should be noted that the cells overlap each other on both the radial and angular coordinates in this paper; the overlapping rate on the radial coordinate is named Lo, and the overlapping rate on the angular coordinate is named Ao. The initial angle θ0 used to build the context frame is equal to the horizontal angle between one border of the quadrilateral, octagonal, or triangular sign and the x-axis [see Figs. 5(a2) and (c2) and 6]. The context frame is established by the shape of the traffic sign, thus allowing the cells to be aligned well with the inside part of the traffic sign, even when rotation and scale variations occur.

Fig. 6. Context frame model. θ0 is the initial angle of the context frame model.
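To make the cell assignment concrete, here is a simplified sketch that maps each pixel of a detected sign region to one of M × N polar cells aligned with the sign's initial angle θ0. It omits the cell overlapping (Lo, Ao) described above, and the function name and binning details are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def polar_cell_index(region_shape, center, theta0_deg, M=3, N=12):
    """Assign each pixel to one of M x N polar cells (non-overlapping simplification)."""
    h, w = region_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - center[0], ys - center[1]
    radius = np.hypot(dx, dy)
    angle = (np.degrees(np.arctan2(dy, dx)) - theta0_deg) % 360.0   # align with theta_0
    r_max = radius.max() + 1e-6
    radial_bin = np.minimum((radius / r_max * M).astype(int), M - 1)
    angular_bin = np.minimum((angle / 360.0 * N).astype(int), N - 1)
    return radial_bin * N + angular_bin        # cell id in [0, M*N)
```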

B. Local Feature Extraction

Ojala et al. [15] proposed the rotation-invariant LBP in 2002. In order to build rotation-invariant features possessing distinctiveness for each cell, the authors propose applying the concept of calculating both the HOG-based structure, to describe the distribution of the holistic orientations, and the LBP-based structure, to describe the distribution of the local shape for each orientation. Then, the sequence of the bins of the holistic HOG is adjusted so that all histograms originate from their principal orientation. As a result of this step, all identical traffic sign images may be considered to be at the same rotation. Finally, the LBP codes of each orientation are integrated based on the distribution of the holistic edge information.

The image gradient is computed in one cell, and the gradient orientation of each pixel is evenly discretized across 0°–180°. Then, a HOG is formed from the gradient orientations of the image. As shown in Fig. 7, the HOG has K bins covering the 180° range of orientations. Each sample added to the histogram is weighted by its gradient magnitude. The maximal bin of the HOG is assigned as the principal orientation θmain. Then, all the bins of the histogram are shifted until the principal orientation occupies the first position.


Fig. 7. Processes of building the LOEMP.

Supposing one cell is X × Y, after computing the image gradient and discretizing the orientation for each pixel, the entire image is represented by building a histogram as

H(\theta_k) = \sum_{x=1}^{X} \sum_{y=1}^{Y} m(x, y)\, f(\theta_k, \theta_p), \quad k \in [1, K]   (4)

where \theta_p is the quantified orientation of each pixel P, K is the number of bins of the histogram, m(x, y) is the gradient magnitude of pixel P, and f is defined as

f(a_1, a_2) = \begin{cases} 1 & \text{if } a_1 = a_2 \\ 0 & \text{otherwise.} \end{cases}   (5)
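A compact sketch of this step, under stated assumptions: it builds the K-bin magnitude-weighted orientation histogram of (4) for one cell and circularly shifts it so that the principal orientation θmain occupies the first bin. The gradient operator and the bin quantization are illustrative choices.

```python
import numpy as np

def oriented_histogram(cell_gray, K=14):
    """Eq. (4): magnitude-weighted orientation histogram, shifted to the principal orientation."""
    gy, gx = np.gradient(cell_gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / 180.0 * K).astype(int), K - 1)   # quantized orientation per pixel
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=K)
    k_main = int(np.argmax(hist))                              # principal orientation theta_main
    return np.roll(hist, -k_main), bins, mag
```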

The second step is to compute the LBP at each pixel, which is the same as the traditional LBP.

Finally, we build the occurrence histogram of LBP for each direction \theta_k (see Fig. 7). This procedure is applied to the accumulated gradient magnitudes and across different directions to build the LOEMP features. A LOEMP feature is calculated for each discretized direction \theta_k, i.e.,

\mathrm{LOEMP}_{\theta_k} = \sum_{x=1}^{M} \sum_{y=1}^{N} f(\theta_k, \theta_p)\, \mathrm{LBP}^{\theta_k}_{P,R}, \quad k \in [1, K]

where

\mathrm{LBP}^{\theta_i}_{P,R} = \sum_{j=1}^{P} f(\theta_i, \theta_p)\, S(m_j - m_c)\, 2^{j-1}

where \theta_p is the quantified orientation of each pixel P; K is the number of bins of the histogram; m_c and m_j are the gradient magnitudes of the central pixel c and surrounding pixel j, respectively; and S(\cdot) thresholds the difference of the two gradient magnitudes, i.e.,

S(x) = \begin{cases} 1 & x \geq 0 \\ 0 & x < 0. \end{cases}   (6)

The final feature is determined as

\mathrm{GLOEMP} = \{\mathrm{LOEMP}_{\theta_1}, \mathrm{LOEMP}_{\theta_2}, \ldots, \mathrm{LOEMP}_{\theta_K}\}.
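The sketch below, continuing from the histogram helper above, illustrates one plausible reading of this construction: at each pixel, an LBP code is formed only from neighbours that share the pixel's orientation bin (the inner f(θi, θp) term) by thresholding their gradient magnitudes against the centre (the S(mj − mc) term), and the codes are accumulated into a separate occurrence histogram per orientation. The 8-neighbourhood with R = 1, the raw 256-bin code histograms (the 140-dimensional length reported in Section VI suggests the authors instead map codes to the 10 rotation-invariant uniform patterns), and the function name are all assumptions.

```python
import numpy as np

def loemp_for_cell(bins, mag, K=14, P=8):
    """Per-orientation LBP occurrence histograms for one cell, concatenated (LOEMP sketch)."""
    h, w = mag.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    hists = np.zeros((K, 2 ** P))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for j, (dy, dx) in enumerate(offsets):
                # only neighbours with the same orientation bin contribute (f term);
                # the bit is set when the neighbour magnitude is not below the centre (S term)
                if bins[y + dy, x + dx] == bins[y, x] and mag[y + dy, x + dx] >= mag[y, x]:
                    code |= 1 << j
            hists[bins[y, x], code] += 1
    return hists.ravel()
```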

C. Properties of the Color Global LOEMP Feature

The Color Global LOEMP feature possesses the following three properties.
1) Color angle patterns are able to encode the discriminative features derived from the spatiochromatic patterns of different spectral channels within a certain local region. This enables the descriptor to contain richer information than other LBP-based features.
2) The framework is able to effectively combine color, global spatial structures, global direction structures, and local shape information. This enables it to contain richer image information.
3) A global-level rotation compensation method, which shifts the principal orientation of the HOG to the first position, makes Color Global LOEMP robust to rotations.
The first two properties allow the features to convey rich image information, and the third one allows the algorithm to be robust to exterior variations.

V. RECOGNITION BASED ON SVMS

Recognition is implemented by multiclass classification with a linear SVM as presented in [25], and the LIBLINEAR library [26] is used in our system.

The recognition stage input is a vector of Color Global LOEMPs. To search for the decision region, all feature vectors of a specific class are grouped together against all vectors corresponding to the rest of the classes (including noisy objects), following the one-versus-all classification algorithm, so that the system can recognize every sign.


To optimize the performance of the linear SVM classifier, an appropriate value of the regularization parameter C has to be selected. A cross-validation over the training set is performed, and the value of C that produces the highest cross-validation accuracy is used. In this paper, C is set to 1.2.
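A minimal sketch of this training procedure, assuming scikit-learn's LinearSVC (which wraps LIBLINEAR) as a stand-in for the library used by the authors; the candidate C values and fold count are illustrative, with 1.2 included as the value chosen in the paper.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def train_sign_classifier(features, labels):
    """One-versus-rest linear SVM over Color Global LOEMP vectors, with C chosen by cross-validation."""
    search = GridSearchCV(LinearSVC(), {"C": [0.1, 0.5, 1.0, 1.2, 2.0, 5.0]}, cv=5)
    search.fit(features, labels)
    return search.best_estimator_
```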

VI. EXPERIMENTS

In order to evaluate the effectiveness of the proposed method, a series of comparative experiments was performed using several traffic sign data sets. The experiments included two components, namely, traffic sign detection and classification.

A. Databases

We illustrate the effectiveness of the detection module by presenting experiments on two traffic sign data sets: the Spanish traffic sign set and the authors' data set. The effectiveness of our recognition module is illustrated by presenting experiments on two traffic sign data sets: the GTSRB data set and the authors' data set.

1) Spanish Traffic Sign Set [20]: Many sequences on different routes and under different lighting conditions were captured. Each sequence included thousands of images. With the aim of analyzing the most problematic situations, 313 images were selected from the thousands of 800 × 600 pixel images extracted into several sets in [20]. The images presented different detection and recognition problems, such as low illumination, rainy conditions, arrays of signs, similar background colors, and occlusions.

2) GTSRB Data Set [21]: The GTSRB data set was created from approximately 10 h of video recorded while driving on different road types in Germany during the daytime. The sequences were recorded in March, October, and November. The testing set contains 12 630 traffic sign images of the 43 classes, and the training set contains 39 209 training images.

3) Authors' Data Set: The authors collected a data set by capturing images on different roads and under different lighting conditions. The camera images have a resolution of 1024 × 768 pixels. Each sequence included several thousand frames, among which more than 5000 frames were analyzed. Visibility status included occluded, blurred, shadowed, and visible. In order to evaluate the effectiveness of the recognition module, the traffic signs were divided into two sets, a testing set and a training set. The testing set contains 4540 actual traffic signs, and the training set contains 4605 actual traffic signs in 41 classes; the training images were captured on different routes from the test images.

It is important to note that all the aforementioned data sets are unbalanced, and the number of images representing different classes varies. Examples of the test images in the aforementioned detection databases are shown in Fig. 8(a) and (b), and examples of the test images used in the recognition database are shown in Fig. 9. As shown in Figs. 8(a) and (b) and 9, these images varied in their rotation angles, geometric deformation, occlusion, and shadows, according to different weather and light conditions.

Fig. 8. Examples used in the experiments for traffic sign detection. (a) Examples on the Spanish traffic sign set. (b) Examples on the authors' data set.

Fig. 9. Examples used in the experiments for traffic sign recognition.

B. Experiments for Traffic Sign Detection

All the elliptical, triangular, quadrilateral, and octagonal regions were detected from the borders of MSERs. The traffic sign regions in the two data sets were all manually labeled by the authors; the sizes of the traffic signs varied between 15 × 15 and 156 × 193 pixels. The manually labeled traffic sign regions were used to evaluate the efficiency of the detection system.

The accuracy of the traffic sign detection was evaluated by observing the outputs of both the detection and recognition modules. If the final result was identified as a traffic sign, then it was considered a "detection." If the algorithm failed to detect a sign that was present in the test image, then it was a "miss." Finally, if the system detected a non-road-sign object and classified it as a traffic sign, then it was a "false alarm." Tables I and II summarize the results generated during the detection processing of the two data sets and include the following information: 1) the total number of traffic signs that appear in the test sequence; 2) the number of detections of traffic signs that have been correctly detected by both the detection and recognition modules; 3) the number of false alarms in the output of the system; and 4) the number of misses. After the traffic sign detection step was completed, the traffic sign candidate regions were input into the recognition module for further classification.

TABLE I. TRAFFIC SIGN DETECTION RESULTS ON THE SPANISH TRAFFIC SIGN DATA SET

TABLE II. TRAFFIC SIGN DETECTION RESULTS ON THE AUTHORS' DATA SET

C. Experiments for Traffic Sign Classification

Several comparative experiments were performed to validate the effectiveness of the proposed approach in TSR systems. The accuracy rate is computed in each comparative experiment with the formula

Accuracy rate = n_m / n_t

where n_m is the number of accurately classified images, and n_t is the number of test images.

1) Comparison With the LBP-Based Features: For comparison purposes, comparative experiments using nine sorts of LBP-based features (LBP^{u2}_{P,R}, LBP^{riu2}_{P,R}, Color + LBP^{riu2}_{P,R}, Global LBP^{riu2}_{P,R}, Color + Global LBP^{riu2}_{P,R}, LOEMP, Color + LOEMP, Global LOEMP, and Color + Global LOEMP) were performed to confirm the effectiveness of the proposed features, where the LBP parameters R and P were set to 2 and 8, respectively.

Additionally, in order to obtain the global spatial structure, the context frame was used to divide the whole image into M × N overlapping cells in polar coordinates, where M was the number of cells on the radial coordinate, and N was the number of cells on the angular coordinate. On the other hand, in order to build the LOEMP feature, the HOG had K bins covering the 180° range of orientations. The number of cells (M × N) and the number of HOG bins (K) are two parameters that impact the performance of the proposed method. Fig. 10(a) and (b) shows the recognition rate obtained by varying the number of cells and HOG bins. As expected, a cell size that is too large or too small results in a decreased recognition rate because of the loss of spatial information or sensitivity to local variations. A smaller number of HOG bins loses discriminative information, and a larger one increases the computational cost. Considering the tradeoff between the recognition rate and the computational cost, in the following experiments the traffic sign images were divided into 3 × 12 cells in polar coordinates, where the overlapping rate on both the angular and radial coordinates (Lo, Ao) was set to 0.5. The number of HOG bins was set to 14.

The comparison features are presented as follows:

• LBP^{u2}_{8,2}: Extraction of the LBP^{u2}_{8,2} ("uniform" LBP) feature and computation of the occurrence histogram from the whole traffic sign region as the final descriptor for classification. The length of the feature vector LBP^{u2}_{8,2} was 59.
• LBP^{riu2}_{8,2}: Extraction of the LBP^{riu2}_{8,2} (uniform rotation-invariant LBP) feature and computation of the occurrence histogram from the whole traffic sign region as the final descriptor for classification, resulting in feature vectors of length 10.
• Color + LBP^{riu2}_{8,2}: Extraction of the LBP^{riu2}_{8,2} feature and computation of the occurrence histogram from each color angle pattern of the whole traffic sign region. After that, combination of the occurrence histograms of all the color angle patterns as the final descriptor for classification. The length of the feature vector Color + LBP^{riu2}_{8,2} was 2 × 10 = 20. It should be noted that only two color angle patterns (θRG and θBG) were used in this experiment.
• Global LBP^{riu2}_{8,2}: Division of the traffic sign region into several cells by the context frame and then extraction of the LBP feature and computation of the occurrence histogram from each cell. Combination of the occurrence histograms of all the cells as the final descriptor for classification. The length of the feature vector Global LBP^{riu2}_{8,2} was 360.
• Color + Global LBP^{riu2}_{8,2}: Division of the traffic sign region into several cells by the context frame and then extraction of the LBP feature and computation of the occurrence histogram from each cell and each color angle pattern. Combination of the occurrence histograms of all the color angle patterns and cells as the final descriptor for classification. The length of the feature vector was 720.
• LOEMP: Extraction of the LOEMP feature and computation of the occurrence histogram from the whole traffic sign region as the final descriptor for classification, resulting in feature vectors of length 14 × 10 = 140.
• Color + LOEMP: Extraction of the LOEMP feature and computation of the occurrence histogram from each color angle pattern. Combination of the occurrence histograms of all the color angle patterns as the final descriptor for classification, resulting in feature vectors of length 140 × 2 = 280.


Fig. 10. TSR rates with different parameters. (a) TSR rates with different cell numbers. (b) TSR rates with different HOG bin numbers.

TABLE III. EXPERIMENTAL RESULTS OF COMPARATIVE EXPERIMENTS ON THE AUTHORS' DATA SET

• Global LOEMP: Division of the traffic sign region into several cells by the context frame and then extraction of the LOEMP feature and computation of the occurrence histogram from each cell. Combination of the occurrence histograms of all the cells as the final descriptor for classification. The length of the feature vector was 140 × 36 = 5040.

• Color + Global LOEMP: Division of the traffic sign region into several cells by the context frame and then extraction of the LOEMP feature and computation of the occurrence histogram from each cell and each color angle pattern. Combination of the occurrence histograms of all the color angle patterns and cells as the final descriptor for classification. The length of the feature vector was 10 080 (the dimension arithmetic is summarized after this list).
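For reference, the descriptor lengths above follow directly from the parameter choices. The breakdown below is a hedged reconstruction, assuming each orientation bin is described by the 10 rotation-invariant uniform LBP codes:

140 (LOEMP per region or per cell) = 14 HOG bins × 10 LBP codes
5040 (Global LOEMP) = 140 × 36 cells (3 × 12)
10 080 (Color Global LOEMP) = 5040 × 2 color angle patterns (θRG, θBG)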

Table III presents the recognition results of the comparison experiments for LBP-based features on the authors' data set. As shown in Table III, the proposed Color Global LOEMP attains the highest recognition rate of all LBP-based feature extraction methods.

2) Comparison With Other Features: For comparison purposes, the following comparative experiments using 1) candidate traffic sign regions (64 × 64 and 32 × 32 pixels) [1], [2]; 2) two sorts of HOG (set 1 and set 2) [3], [5]; and 3) color histograms were performed to confirm the effectiveness of the proposed Color Global LOEMP.

• Candidate traffic sign regions: The recognition stage inputs were candidate traffic sign regions that were scaled to a size of 64 × 64 or 32 × 32 pixels in grayscale images.

TABLE IV. EXPERIMENTAL RESULTS OF COMPARISON WITH OTHER DESCRIPTORS ON THE GTSRB DATA SET

TABLE V. EXPERIMENTAL RESULTS OF COMPARISON WITH OTHER DESCRIPTORS ON THE AUTHORS' DATA SET

• HOG descriptors: Based on the gradients of the color images, different weighted and normalized histograms were calculated, first for the small nonoverlapping cells of multiple pixels that cover the whole image (set 1) and then for the larger overlapping blocks that integrate over multiple cells (set 2). Two sets of features from differently configured HOG descriptors were used. To compute the HOG descriptors, all images were scaled to a size of 128 × 128 pixels. For sets 1 and 2, the sign of the gradient response was ignored. Sets 1 and 2 used cells of 16 × 16 pixels, a block size of 4 × 4 cells, and an orientation resolution of 8 (a configuration sketch follows this list).

• Color histograms: This set of features was provided to complement the gradient-based feature sets with color information. It contains a global histogram of the hue values in HSV color space, resulting in 256 features per image.
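As referenced in the HOG item above, the following minimal sketch reproduces an approximation of that baseline configuration with scikit-image; the specific block normalization scheme is an assumption, since it is not stated above.

```python
from skimage.feature import hog
from skimage.transform import resize

def hog_baseline(gray_image):
    """HOG baseline: 128 x 128 input, 16 x 16 pixel cells, 4 x 4 cell blocks, 8 unsigned orientations."""
    img = resize(gray_image, (128, 128), anti_aliasing=True)
    return hog(img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(4, 4), block_norm="L2-Hys")
```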

The experimental results of the comparisons with other features are listed in Tables IV and V, with the best results marked in bold font. As shown in Tables IV and V, the proposed Color Global LOEMP attains the highest recognition rate of all feature extraction methods.

Additionally, in Fig. 11, the confusion matrix shows the recognition rate of the 41 classes in the authors' data set. The values on the x-axis represent the individual traffic sign classes, and the values on the y-axis represent the predictions made by the classifier. In Fig. 11, most of the confusions occurred between very similar images, for example, some triangular signs.

Fig. 11. Confusion matrix obtained for a 41-class traffic sign problem.

3) Comparison With Other Systems: In [1], Maldonado-Bascón et al. adopted the inner region of the traffic sign, normalized to 31 × 31 pixels, as the descriptor, and a linear SVM was adopted as the classifier. Using their method, the accuracy rate on the GTSRB data set was 85.7869%. In [13], Yuan et al. proposed a context-aware SIFT-based algorithm for TSR. Furthermore, a method for computing the similarity between two images is proposed in their paper, which focuses on the distribution of the matching points rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. Because the dimensions of the SIFT features differ from each other, it is difficult to design a suitable classifier for classification based on the SIFT feature; template matching is used in their system. Using the context-aware SIFT method, the recognition rate on the GTSRB data set was 79.2381%.

The results of GTSRB Competition Phase I were listed in [21]. Those experiments made use of the inner region of signs, Haar, or HOG features for recognition. In the recognition stage, classifiers such as convolutional neural networks (CNNs) and subspace analysis were proposed for traffic sign recognition. The CNN-based method attained the highest accuracy rate of all comparison experiments. In this paper, a novel feature of a traffic sign was proposed; in the recognition stage of our system, the linear SVM was adopted as the classifier. The best accuracy rate based on a linear SVM classifier was 95.89% on the results list of GTSRB Competition Phase I.

TABLE VI. PROCESSING TIME OF THE TSR SYSTEM PER FRAME

The proposed approach, with an accuracy rate of 97.2581% on the GTSRB data set, outperformed all the aforementioned feature extraction methods of the compared traffic sign recognition systems.

4) Processing Time: The system ran on a 2.93-GHz Intel Core Duo E7500 central processing unit with MATLAB 7.0, where the frame dimensions were 1360 × 1024, and the Just-In-Time Accelerator was used to speed up our programs. The system speed was around 4 frames/s. The average processing time of each part is listed in Table VI.

D. Discussion

As shown in Table III, the recognition rates increased by about 8% when the color information was combined, due to the fact that some traffic signs have the same local features but different background colors. Combining the color information provided richer image features that are robust to illumination variations.

The recognition rates increased by about 23% when the global spatial structure information was combined, due to the fact that some traffic signs had the same local patterns and background colors but different distributions of global shapes.


The recognition rates increased by about 21% using LOEMP rather than LBP^{riu2}. The inner region of a traffic sign is formed by many line microstructures; therefore, most of the LBP^{riu2} values detected in traffic sign images were the same, and the important issue for TSR was how to describe the relationship of the orientations among the local features. Because LOEMP characterizes image information across different directions, it contained richer image information than LBP^{riu2}. Furthermore, because LOEMP combines both the global direction structure and the local texture features, LOEMP was more suitable for describing the traffic signs.

The recognition rates increased by about 8% using LOEMP rather than LBP^{u2}, due to the fact that the global-level rotation compensation method proposed in LOEMP shifts the principal orientation of the HOG to the first position, making LOEMP robust to rotation; in contrast with LOEMP, LBP^{u2} was liable to be affected by rotations.

VII. CONCLUSION

This paper has proposed a novel descriptor for a TSR system, known as the Color Global LOEMP. The Color Global LOEMP is a framework that is able to effectively combine color, global spatial structures, global direction structures, and local shape information. In addition, the proposed Color Global LOEMP is robust to illumination conditions, scale, and rotation variations. In order to verify the effectiveness of the detection module, two traffic sign data sets, i.e., the Spanish traffic sign set and the authors' data set, were tested. Furthermore, two traffic sign data sets, i.e., the GTSRB data set and the authors' data set, were tested to validate the efficiency of the recognition module. The images were captured under different conditions of geometric deformation, damage, weather, and lighting. Different image features, such as the HOG feature, color histogram features, and nine sorts of LBP-based features, were used for the purpose of comparison, and the experimental results demonstrated the effectiveness of the proposed method.

REFERENCES

[1] S. Maldonado-Bascón, S. Lafuente-Arroyo, P. Gil-Jiménez, H. Gómez-Moreno, and F. López-Ferreras, "Road-sign detection and recognition based on support vector machines," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 2, pp. 264–278, Jun. 2007.
[2] S. Maldonado-Bascón, J. Acevedo-Rodríguez, S. Lafuente-Arroyo, A. Fernández-Caballero, and F. López-Ferreras, "An optimization on pictogram identification for the road-sign recognition task using SVMs," Comput. Vis. Image Understand., vol. 114, no. 3, pp. 373–383, Mar. 2010.
[3] J. Greenhalgh and M. Mirmehdi, "Real-time detection and recognition of road traffic signs," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 4, pp. 1498–1506, Dec. 2012.
[4] M. S. Prieto and A. R. Allen, "Using self-organizing maps in the detection and recognition of road signs," Image Vis. Comput., vol. 27, no. 6, pp. 673–683, May 2009.
[5] A. Ruta, Y. Li, and X. H. Liu, "Robust class similarity measure for traffic sign recognition," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 4, pp. 846–855, Dec. 2010.
[6] X. W. Gao, L. Podladchikova, D. Shaposhnikov, K. Hong, and N. Shevtsov, "Recognition of traffic signs based on their colour and shape features extracted using human vision models," J. Vis. Commun. Image Represent., vol. 17, no. 4, pp. 675–685, Aug. 2006.
[7] G. Loy, "Fast shape-based road sign detection for a driver assistance system," in Proc. IEEE Int. Conf. Intell. Robots Syst., 2004, pp. 70–75.
[8] H. Fleyeh and E. Davami, "Eigen-based traffic sign recognition," IET Intell. Transp. Syst., vol. 5, no. 3, pp. 190–196, Sep. 2011.
[9] A. de la Escalera, J. M. Armingol, J. M. Pastor, and F. J. Rodriguez, "Visual sign information extraction and identification by deformable models for intelligent vehicles," IEEE Trans. Intell. Transp. Syst., vol. 5, no. 2, pp. 57–68, Jun. 2004.
[10] M. Takaki and H. Fujiyoshi, "Traffic sign recognition using SIFT features," IEEJ Trans. Electron., Inf. Syst., vol. 129, no. 5, pp. 824–831, 2009.
[11] A. Ihara, H. Fujiyoshi, M. Takaki, H. Kumon, and Y. Tamatsu, "Improvement in the accuracy of matching by different feature subspaces in traffic sign recognition," IEEJ Trans. Electron., Inf. Syst., vol. 129, no. 5, pp. 893–900, 2009.
[12] A. Abdel-Hakim and A. A. Farag, "CSIFT: A SIFT descriptor with color invariant characteristics," in Proc. IEEE CVPR, 2006, pp. 1978–1983.
[13] X. Yuan, X. L. Hao, H. J. Chen, and X. Y. Wei, "Traffic sign recognition based on a context-aware scale-invariant feature transform approach," J. Electron. Imaging, vol. 22, no. 4, p. 041105, Jul. 2013.
[14] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE CVPR, 2005, pp. 886–893.
[15] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.
[16] Z. Guo, L. Zhang, and D. Zhang, "Rotation invariant texture classification using LBP variance with global matching," Pattern Recognit., vol. 43, no. 3, pp. 706–719, Mar. 2010.
[17] S. Liao, M. W. K. Law, and A. C. S. Chung, "Dominant local binary patterns for texture classification," IEEE Trans. Image Process., vol. 18, no. 5, pp. 1107–1118, May 2009.
[18] Z. Guo, L. Zhang, and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1657–1663, Jun. 2010.
[19] H. Lategahn, S. Gross, T. Stehle, and T. Aach, "Texture classification by modeling joint distributions of local patterns with Gaussian mixtures," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1548–1557, Jun. 2010.
[20] H. Gómez-Moreno, S. Maldonado-Bascón, P. Gil-Jiménez, and S. Lafuente-Arroyo, "Goal evaluation of segmentation algorithms for traffic sign recognition," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 4, pp. 917–930, Dec. 2010.
[21] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition," Neural Netw., vol. 32, no. 8, pp. 323–332, Aug. 2011.
[22] Y. Xie and Q. Ji, "A new efficient ellipse detection method," in Proc. 16th Int. Conf. Pattern Recognit., 2002, pp. 957–960.
[23] J. Y. Choi, Y. M. Ro, and K. N. Plataniotis, "Color local texture features for color face recognition," IEEE Trans. Image Process., vol. 21, no. 3, pp. 1366–1380, Mar. 2012.
[24] S. Belongie, J. Malik, and J. Puzicha, "Shape matching and object recognition using shape contexts," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 509–522, Apr. 2002.
[25] C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learn., vol. 20, no. 3, pp. 273–297, Sep. 1995.
[26] C. Chang and C. Lin, A Library for Support Vector Machines, 2001. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm
[27] V. Perlibakas, "Distance measures for PCA-based face recognition," Pattern Recognit. Lett., vol. 25, no. 6, pp. 711–724, Apr. 2004.

Xue Yuan received the B.S. degree from Northeastern University, Shenyang, China, and the M.S. and Ph.D. degrees from Chiba University, Chiba, Japan, in 2004 and 2007, respectively.

In 2007 she joined the Intelligent Systems Laboratory, SECOM Company Ltd., Tokyo, Japan, as a Researcher. Since 2010 she has been with the School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, China. She is also currently with the Chinese Academy of Surveying and Mapping, Beijing. Her research interests include image processing and pattern recognition.


Xiaoli Hao received the B.E., M.E., and Ph.D. degrees from Beijing Jiaotong University, Beijing, China, in 1992, 1995, and 2010, respectively.

In 1995 she joined the School of Electronics and Information Engineering, Beijing Jiaotong University, where she has been an Associate Professor since 2002. In 2006 she was a Visiting Scholar with the University of California, San Diego, CA, USA. Her research interests include optical imaging, signal processing, and machine vision.

Houjin Chen received the B.E. degree from Lanzhou Jiaotong University, Lanzhou, China, in 1986, and the M.E. and Ph.D. degrees from Beijing Jiaotong University, Beijing, China, in 1989 and 2003, respectively.

In 1989 he joined the School of Electronics and Information Engineering, Beijing Jiaotong University, where he became a Professor in 2000 and is currently the Dean of the School of Electronics and Information Engineering. In 1997 he was a Visiting Scholar with Rice University, Houston, TX, USA, and in 2000 he was a Visiting Scholar with the University of Texas at Austin, Austin, TX. His research interests include signal and information processing, image processing, and the simulation and modeling of biological systems.

Xueye Wei received the B.S. and M.S. degrees from Tianjin University, Tianjin, China, in 1985 and 1988, respectively, and the Ph.D. degree from Beijing Institute of Technology, Beijing, China, in 1994.

He is a Professor of electronics and information engineering at Beijing Jiaotong University, Beijing. His research interests are in the theory of automatic control.