

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 11, NO. 8, AUGUST 2002 825

Wavelet-Based Rotational Invariant Roughness Features for Texture Classification and Segmentation

Dimitrios Charalampidis, Member, IEEE, and Takis Kasparis, Member, IEEE

Abstract—In this paper, we introduce a rotational invariant feature set for texture segmentation and classification, based on an extension of fractal dimension (FD) features. The FD extracts roughness information from images considering all available scales at once. In this work, a single scale is considered at a time so that textures with scale-dependent properties are satisfactorily characterized. Single-scale features are combined with multiple-scale features for a more complete textural representation. Wavelets are employed for the computation of single- and multiple-scale roughness features because of their ability to extract information at different resolutions. Features are extracted in multiple directions using directional wavelets, and the feature vector is finally transformed to a rotational invariant feature vector that retains the texture directional information. An iterative K-means scheme is used for segmentation, and a simplified form of a Bayesian classifier is used for classification. The use of the roughness feature set results in high-quality segmentation performance. Furthermore, it is shown that the roughness feature set exhibits a higher classification rate than other feature vectors presented in this work. The feature set retains the important properties of FD-based features, namely insensitivity to absolute illumination and contrast.

Index Terms—Fractals, K-means, segmentation, texture, wavelets.

I. INTRODUCTION

TEXTURED image analysis is a topic investigated by researchers in the last few decades. The goal of the techniques presented in the literature is to duplicate the ability of the human brain to understand textural characteristics in an efficient way. Some of the applications that demonstrate the importance of texture analysis are found in medicine, remote sensing [1], [2], and industry. Textures are usually characterized by features. Features that have been used in the literature include Gabor transforms [3], [4], wavelet-based features [12], and features extracted from images modeled as Markov random fields [5], [14]. Several attempts have been made to characterize texture by its roughness using fractal dimension (FD) [6]–[8], [15], which is relatively insensitive to illumination and contrast variations.

In traditional FD analysis, it is assumed that natural textures exhibit similar roughness over a large number of scales. This assumption is not valid for many textures. An approach to expand the FD concept has been proposed in [10], where the total scale range is divided into subranges and fractal-based measures are extracted for each subrange.

Manuscript received October 8, 2001; revised April 29, 2002. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Aleksandra Mojsilović.

D. Charalampidis is with the Department of Electrical Engineering, College of Engineering, University of New Orleans, New Orleans, LA 70148 USA (e-mail: [email protected]).

T. Kasparis is with the School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, FL 32816 USA (e-mail: [email protected]).

Publisher Item Identifier 10.1109/TIP.2002.801117.

In this paper, we extend the idea of multiscale roughness by extracting features considering only a single scale at a time. Sometimes, a texture consists of relatively large objects (circles, squares, lines); thus, roughness is only a result of abrupt changes or transitions located at the object edges. On the other hand, the roughness for statistical textures can be estimated using multiple scales at a time. We use a combination of the two feature types to obtain a robust feature vector (FV) that describes both aforementioned cases. One of the several methods used for computation of fractal and multifractal features requires the use of wavelets [1]. In this paper, wavelets taken from the exponential wavelet family are used for the computation of roughness features.

An important texture characteristic is directionality. Without directional information, it may be impossible to distinguish textures that have no difference otherwise. On the other hand, an often-necessary feature property that is usually overlooked in the literature is rotation invariance. It is important to be able to classify together similar textures independent of the rotation angle. In this paper, roughness features are computed in different directions in an effort to simulate the human visual system, which perceives roughness in different directions. The visual effect of directional roughness depends highly on the relative textural energy in different directions. Thus, we weight the roughness features computed in a direction with the percentage of textural energy in the same direction. The weighted roughness features comprise a rotational variant FV, which is then transformed to a rotational invariant FV that preserves directional information. The rotational invariant FV retains the important properties of illumination and contrast invariance.

Texture segmentation and classification are two similar tasks, yet they have different requirements. Segmentation requires an FV that effectively provides local information, and a boundary-preserving clustering algorithm. Classification requires an FV that can globally identify textures in a large database, and it is a supervised task. For segmentation purposes we use an iterative K-means algorithm that we developed in our previous work [15] and improve here. The clustering algorithm estimates the number of clusters (textures) in the image. Simulation results illustrate that our FV facilitates differentiation of different textures and successful estimation of the number of uniform textural regions. For classification purposes, we use a simple Bayesian classifier. Comparisons between FVs show the superiority of our FV with respect to the percentage of correct classification, and to insensitivity to rotation and skew.

1057-7149/02$17.00 © 2002 IEEE


This paper is organized as follows. In Section II, the roughness features are introduced. Sections III and IV describe the texture segmentation and classification methods, respectively. In Section V, segmentation and classification results are presented, and Section VI concludes with some closing remarks.

II. FEATURE SET

In this section, we introduce the roughness features. First, we define the features for one-dimensional (1-D) signals and then we expand the definition for two-dimensional (2-D) signals.

A. Fractals and the Hurst Parameter in 1-D

Scale invariance analysis is a framework for developing statistical tools that account for all available scales at once. It is a property respected by systems whose large and small scales are related by a scale-changing operation involving only the scale ratio. Thus, these systems do not have a characteristic scale. Multifractals have been proven to be useful in the analysis of complex systems [2]. In multifractal analysis, we seek a power-law relation between a measure extracted from the signal $f(t)$ and the scale parameter $s$ under consideration. The measure is traditionally $\langle |f(t+s)-f(t)|^{q} \rangle$, and the power-law relation is

$\langle |f(t+s)-f(t)|^{q} \rangle \propto s^{qH(q)}$   (1)

where $\langle \cdot \rangle$ denotes arithmetic average. Practically, $H(q)$ is computed for each exponent $q$ as the slope of the line that best fits the points $(\log s,\ (1/q)\log\langle |f(t+s)-f(t)|^{q} \rangle)$. The Hurst parameter $H$ is defined from $H(q)$ and is related to the FD of an $N$-D signal by $\mathrm{FD} = N + 1 - H$.
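To make the estimation procedure concrete, the following is a minimal Python sketch of fitting a Hurst-type exponent by linear regression in log-log coordinates, assuming the measure is the arithmetic average of $|f(t+s)-f(t)|^{q}$ as in (1); the scale range and the value of $q$ are illustrative choices, not values taken from this paper.

```python
import numpy as np

def hurst_exponent(f, scales=(1, 2, 4, 8, 16, 32), q=1.0):
    """Estimate a Hurst-type exponent H(q) for a 1-D signal f.

    The measure at scale s is the arithmetic average of |f(t+s)-f(t)|^q.
    H(q) is the slope of the line fitting (log s, (1/q) log measure).
    """
    f = np.asarray(f, dtype=float)
    log_s, log_m = [], []
    for s in scales:
        incr = np.abs(f[s:] - f[:-s]) ** q      # increments at lag s
        log_s.append(np.log(s))
        log_m.append(np.log(np.mean(incr)) / q)
    slope, _ = np.polyfit(log_s, log_m, 1)      # least-squares line fit
    return slope                                 # estimate of H(q)

# Example: a Brownian-like signal built from cumulative sums of white noise,
# for which H should be close to 0.5.
rng = np.random.default_rng(0)
f = np.cumsum(rng.standard_normal(4096))
print(hurst_exponent(f))
```

For an $N$-D signal, the corresponding fractal dimension would then follow from $\mathrm{FD} = N + 1 - H$.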

The assumption that textures exhibit similar roughness over a large number of scales is not valid for many textures. Kaplan [10] expanded the idea of self-similarity to obtain multiscale Hurst parameters considering only two scales at a time. The Hurst parameters are computed by measuring the scaling behavior at consecutive dyadic scales $2^{m}$ and $2^{m+1}$. The 1-D multiscale Hurst parameters are

(2)

The signal derivative is a measure of signal sharpness. The signal difference $f(t+s)-f(t)$ used in traditional multifractal analysis is the convolution output of the signal and the $s$-scale filter whose impulse response is $h_{s}(t)=\delta(t+s)-\delta(t)$, where $\delta(t)$ is the Dirac function. Thus, $f(t+s)-f(t)$ is the derivative of an $s$-point moving average. The filter $h_{s}(t)$ can be substituted by an exponential wavelet, which is the derivative of a Gaussian smoothing function [9] and possesses better frequency localization. Consequently, the traditionally used measure $\langle |f(t+s)-f(t)|^{q} \rangle$ is substituted by the corresponding average of the wavelet-filtered signal. The impulse response $h_{s}(t)$ and the one-exponential wavelet are shown in Fig. 1(a). The $s$-point moving average and the Gaussian smoothing function are shown in Fig. 1(b).

Fig. 1. Impulse responses of (a) the s-scale wavelet and the s-scale difference filter and (b) the s-scale Gaussian smoothing filter and the s-scale moving average filter.

Fig. 2. Transition frequency example. (a) Signal $f_{1}(t)$ with high-frequency transitions, (b) signal $f_{2}(t)$ with low-frequency transitions, (c) signal $f_{3}(t) = f_{1}(t) + f_{2}(t)$ with high- and low-frequency transitions, (d) magnitude of the signal filtered by a large-scale wavelet, and (e) magnitude of the signal filtered by a small-scale wavelet.

The $n$-exponential wavelet is defined as the $n$th derivative of a Gaussian smoothing function

(3)
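As an illustration of the exponential wavelet family, the sketch below filters a 1-D signal with the $n$th derivative of a Gaussian smoothing function at scale $s$, using scipy's Gaussian derivative filter. The scale value is a placeholder, and the particular normalization of (3) is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def exponential_wavelet_filter(f, s, n=1):
    """Convolve a 1-D signal with the n-th derivative of a Gaussian of scale s.

    This plays the role of the n-exponential wavelet: n = 1 gives a smoothed
    derivative (transition detector), n = 2 a smoothed second derivative.
    """
    return gaussian_filter1d(np.asarray(f, dtype=float), sigma=s, order=n)

# A step signal: the magnitude of the 1-exponential wavelet response should
# peak at the transition.
f = np.concatenate([np.zeros(100), np.ones(100)])
w1 = exponential_wavelet_filter(f, s=4, n=1)
print(np.argmax(np.abs(w1)))   # index near the step at t = 100
```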

B. Single-Scale Roughness Features in 1-D

In this section, we extend the idea of multiscale roughness by extracting features considering only one scale at a time. Sometimes, a texture consists of relatively large objects, so that roughness is a result of abrupt changes or transitions [9], and is not related to scale-invariant statistical texture properties. The visual effect of the transitions depends on two characteristics: the transition frequency (TF) and the transition length (TL).

The TF determines how often a transition appears. In Fig. 2(a) and (b), respectively, a high-TF signal $f_{1}(t)$ and a low-TF signal $f_{2}(t)$ are presented. Signals may have different TFs superimposed. Fig. 2(c) shows signal $f_{3}(t)$, which is a superposition of $f_{1}(t)$ and $f_{2}(t)$. In order to isolate TFs around a particular frequency, a filter with central frequency around that frequency is required. In other words, considering a specific scale allows identification of the transitions that are detectable at this scale. Wavelets are suitable for this task due to their ability to decompose a signal into different scale components. An example of signal decomposition into different TFs is shown in Fig. 2(d) and (e). Fig. 2(d) shows that when the signal is filtered by a large-scale wavelet, the response magnitude peaks at the transitions of the low-TF components. Fig. 2(e) shows that when the signal is filtered by a small-scale wavelet, the response magnitude peaks at the transitions of the high-TF components.

Fig. 3. Fast and slow transition examples. (a) Signal with a fast transition, (b) signal with a slow transition, (c) and (d) magnitudes of the two signals filtered by a wavelet, and (e)-(h) the corresponding magnitudes after applying the maximum operator over an interval of half-length $\varepsilon$, for $\varepsilon = 4$ [(e), (f)] and $\varepsilon = 2$ [(g), (h)].

After the signal is decomposed into different TFs, transitions are characterized by their TL. The wavelet response magnitude $|W(t,s)|$ peaks at transition points, and these peaks are narrower for sharper transitions. Fig. 3(a) shows a slow transition, and Fig. 3(b) shows a sharp transition. Fig. 3(c) and (d) show that the peak produced by the slow transition [Fig. 3(c)] is wider than the peak produced by the sharp transition [Fig. 3(d)]. Thus, slower transitions result in wider peaks. Consider the operator $\max_{|\tau - t| \le \varepsilon}$, which denotes the maximum of the argument in an interval of half-length $\varepsilon$ around $t$, and the function $\max_{|\tau - t| \le \varepsilon}|W(\tau,s)|$. The effect of applying the operator to the function $|W(t,s)|$ is the broadening of the peaks in $|W(t,s)|$. The more $\varepsilon$ is increased, the broader the peaks become. This effect is shown in Fig. 3(e) and (g), where the broadened magnitude of the first signal is illustrated for $\varepsilon = 4$ and $\varepsilon = 2$, respectively, and in Fig. 3(f) and (h), where the broadened magnitude of the second signal is illustrated for $\varepsilon = 4$ and $\varepsilon = 2$, respectively. From Fig. 3(e)-(h), we notice that if a peak is already wide [Fig. 3(c)], then the rate of broadening with respect to $\varepsilon$ is not as high as if the peak were narrow [Fig. 3(d)]. Consider the measure $\langle \max_{|\tau - t| \le \varepsilon}|W(\tau,s)| \rangle$, which is a measure of the area below the broadened peak (and, consequently, of the TL) and of the peak's amplitude. In order to restrict the measure's dependence only on the TL, we approximate the relation between the measure and the parameter $\varepsilon$ with a power-law relation similar to fractal analysis

(4)

where the exponent is the roughness feature at scale $s$. The feature evaluates the measure's rate of change with respect to $\varepsilon$. In other words, it is an average measure of the TLs in a signal: if $|W(t,s)|$ consists of narrow peaks (short TL), the measure increases significantly with $\varepsilon$ and, as a result, the roughness feature is large. If $|W(t,s)|$ consists of wide peaks (long TL), the measure does not increase considerably with $\varepsilon$ and, according to (4), the roughness feature is small. The feature is independent of the signal's or peak's amplitude, since it is the same even if $f(t)$ is substituted by $cf(t)$ ($c$ is a constant). We should mention that varying the parameter $\varepsilon$ is not equivalent to varying the scale $s$.
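The following sketch assembles the single-scale roughness measurement described above: the wavelet response magnitude is broadened with a running maximum over an interval of half-length $\varepsilon$, spatially averaged, and the slope of the log-average versus $\log\varepsilon$ is taken as the roughness feature. The $\varepsilon$ values and the scale are placeholders, and the power-law form of (4) is assumed as reconstructed in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, maximum_filter1d

def roughness_1d(f, s, epsilons=(1, 2, 3)):
    """Single-scale roughness feature of a 1-D signal at scale s.

    Steps: (i) wavelet response magnitude |W(t, s)| using a first-derivative-
    of-Gaussian wavelet, (ii) broadening by a running maximum over an interval
    of half-length eps, (iii) spatial average, (iv) slope of log(average)
    versus log(eps).
    """
    w = np.abs(gaussian_filter1d(np.asarray(f, dtype=float), sigma=s, order=1))
    log_e, log_m = [], []
    for eps in epsilons:
        broadened = maximum_filter1d(w, size=2 * eps + 1)   # running maximum
        log_e.append(np.log(eps))
        log_m.append(np.log(np.mean(broadened)))
    slope, _ = np.polyfit(log_e, log_m, 1)
    return slope

# Sharp transitions (narrow peaks) should yield a larger slope than smooth
# variations (wide peaks).
sharp = np.sign(np.sin(np.linspace(0, 20 * np.pi, 2000)))    # square wave
smooth = np.sin(np.linspace(0, 20 * np.pi, 2000))            # sinusoid
print(roughness_1d(sharp, s=3), roughness_1d(smooth, s=3))
```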

C. Roughness Features in 2-D

In this section, we expand the single-scale roughness idea from 1-D to 2-D signals. The goal is to compute measures in multiple directions similar to 1-D, using directional wavelets. First, we introduce the directional roughness features. Then, we introduce the directional percentage of energy features and the weighted roughness features. Finally, we propose a transformation of the rotation variant weighted roughness features, which, together with rotational invariant Hurst features, forms a rotational invariant FV.

A 2-D Gaussian smoothing function at scale $s$ is defined as

(5)

We define two wavelets that are the partial derivatives of the smoothing function along $x$ and $y$ [9] (directions 0° and 90°, respectively)

(6)

The subscript indicates the wavelet direction angle. Since differentiation and convolution are linear operators

and

(7)


Fig. 4. Examples of lines that best fit $(\log \varepsilon,\ \log\langle\max|W^{T}(u,v,s)|\rangle)$ ($s = 2$, $N = 9$, $\theta = 0°$, $\varepsilon = 1, 2, 3$) for two textures, at different texture locations.

Therefore, the two convolutions in (7) are the gradient components of the smoothed image in directions 0° and 90°, respectively, or, equivalently, the versions of $f$ filtered by the 0° and 90° wavelets. Computation of the filtered signal in an arbitrary direction $\theta$ requires filtering of $f$ by the wavelet oriented at $\theta$. This convolution is the $\theta$-direction gradient component of the smoothed image and can be computed as a linear combination of the 0° and 90° components

(8)

In other words, the directional wavelet is steerable [17]; therefore, the filtered output is computed for different angles $\theta$ from the 0° and 90° components using (8), which saves significantly in computation time.
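A sketch of the 2-D directional filtering under the standard steerability property of a first-derivative-of-Gaussian filter [17]: the 0° and 90° responses are computed once, and the response at any angle $\theta$ is obtained as $\cos\theta$ times the 0° response plus $\sin\theta$ times the 90° response. This is assumed to be the content of (8); the image, scale, and angle set are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_responses(image, s, angles_deg=(0, 45, 90, 135)):
    """First-derivative-of-Gaussian responses of an image at several directions.

    Only two responses are computed by filtering (derivatives along the two
    image axes); the remaining angles are steered as linear combinations.
    """
    img = np.asarray(image, dtype=float)
    w0 = gaussian_filter(img, sigma=s, order=(0, 1))   # derivative along columns
    w90 = gaussian_filter(img, sigma=s, order=(1, 0))  # derivative along rows
    out = {}
    for a in angles_deg:
        t = np.deg2rad(a)
        out[a] = np.cos(t) * w0 + np.sin(t) * w90      # steering relation
    return out

# Example on a random texture-like image.
rng = np.random.default_rng(1)
responses = directional_responses(rng.standard_normal((64, 64)), s=2)
print({a: float(np.abs(r).mean()) for a, r in responses.items()})
```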

Similarly, we can consider the second partial derivatives of the smoothing function

(9)

The two subscripts indicate that the wavelet is the second partial derivative of the smoothing function along the two corresponding directions. One can easily show that the convolution between $f$ and the second-derivative wavelet at an arbitrary direction can be computed using the components in (9)

(10)

We define the following two wavelet transforms of a function at scale $s$ and direction $\theta$:

(first derivative wavelet)

(second derivative wavelet). (11)

We introduce the directional roughness features similarly to (4)

(12)

where $\langle\cdot\rangle$ indicates spatial arithmetic average in an $N \times N$ window, and $T = 1, 2$ (first and second derivative, respectively). The directional roughness features are computed as the slope of the line that best fits $(\log\varepsilon,\ \log\langle\max|W^{T}(u,v,s)|\rangle)$.

Fig. 4 illustrates examples of such lines for two different textures, at different texture locations, for $s = 2$, $N = 9$, $\theta = 0°$, and $\varepsilon = 1, 2, 3$. Fig. 4 shows that lines extracted from different locations of the same texture are almost parallel to each other. This result verifies the robustness of the roughness features (slopes of the lines). The distance between lines corresponding to the same texture is caused by contrast differences at various texture locations.

So far, the wavelets presented in this work are continuous functions. However, we are dealing with discrete images; therefore, the wavelet implementation in (12) has to be performed on a discrete lattice. This is achieved by setting the spatial variables $u$ and $v$ as integer instead of continuous variables. This implementation is highly accurate if the scale $s$ is large enough to avoid aliasing, and also if the size of the discrete lattice on which the wavelets are implemented is sufficiently large. The aliasing effect for the smallest scale considered is insignificant. Moreover, the wavelets are implemented on a discrete lattice with size at least six times the scale $s$.

The visual effect of roughness depends highly on the relative texture energy between different directions. For instance, even if a texture is rougher in direction $\theta_{1}$ than in direction $\theta_{2}$, but has lower energy in direction $\theta_{1}$ than in direction $\theta_{2}$, direction $\theta_{1}$ may not appear as rough as direction $\theta_{2}$. For that purpose, we weight the roughness features that are computed in each direction with the percentage of textural energy existing in the corresponding direction. The energy computed in direction $\theta$ using an $s$-scale wavelet is defined as

(13)

The total energy at scale $s$ is

(14)

We introduce the percentage of energy feature computed in direction $\theta$ and at scale $s$ as

(15)

The percentage of energy is insensitive to the absolute image illumination, since the energy is computed using exponential wavelets from which the dc component is removed. It is also insensitive to contrast changes, since such a change multiplies both the directional energy and the total energy by the same constant multiplicative term.

We define the weighted roughness features as the roughness features weighted with the percentage of energy that corresponds to the same scale and direction

(16)
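The sketch below combines the pieces of (12)-(16) as described in the text: per-direction energies within an $N \times N$ window, the percentage of energy, and roughness features weighted by that percentage. The energy is assumed here to be the window-averaged squared directional response; the exact normalization of (13)-(14) is not recoverable from the extracted text, so take this only as an illustration of the weighting idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def weighted_roughness(image, s, roughness_by_dir, angles_deg=(0, 45, 90, 135), N=9):
    """Weight per-direction roughness features by the percentage of directional energy.

    roughness_by_dir: dict {angle: roughness value or map} computed as in (12).
    Assumed energy form: N x N local average of the squared directional response.
    """
    img = np.asarray(image, dtype=float)
    w0 = gaussian_filter(img, sigma=s, order=(0, 1))
    w90 = gaussian_filter(img, sigma=s, order=(1, 0))
    energies = {}
    for a in angles_deg:
        t = np.deg2rad(a)
        w_theta = np.cos(t) * w0 + np.sin(t) * w90
        energies[a] = uniform_filter(w_theta ** 2, size=N)        # directional energy
    total = sum(energies.values())                                 # total energy over directions
    percentage = {a: e / (total + 1e-12) for a, e in energies.items()}   # percentage of energy
    return {a: percentage[a] * roughness_by_dir[a] for a in angles_deg}  # weighted roughness

# Usage with placeholder roughness values.
rng = np.random.default_rng(2)
dummy_rough = {a: 1.0 for a in (0, 45, 90, 135)}
wr = weighted_roughness(rng.standard_normal((64, 64)), s=2, roughness_by_dir=dummy_rough)
```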

Next, we propose a set of transformations on the rotational variant features to form a rotational invariant FV that retains most of the directional information. We name the rotational invariant vector according to the number of its features, which appears as a superscript. This FV consists of two subvectors. The first subvector consists of features that do not hold directional information. These features are the average of the weighted roughness features with respect to direction

(17)

where the average is taken over the total number of directions. Furthermore, in this subvector we include a nondirectional version of the Hurst features, which we define as

(18)

The features in (18) are computed considering only two consecutive scales at a time, as was suggested in [10]. These features relate consecutive scales, and they are especially suited for textures that are characterized by scale invariance over short scale ranges.

The second subvector contains features that hold the directional information. These features are computed for each scale as the maximum absolute feature difference for directions that differ by 45°

(19)

and for directions that differ by 90°

(20)

The subscript "1" in (20) indicates that these features are computed only for the case where the first derivatives were used for the computation of the features. While the features in (19) and (20) are measures of the directionality of a texture, they are relatively rotational invariant, since they are computed as maximum feature differences and they are independent of the exact directions at which these differences appear. The larger the number of directions used for feature computation, the higher the insensitivity of the features with respect to rotation.
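A sketch of the transformation from the rotation-variant weighted roughness features to rotation-invariant ones, following the description of (17), (19), and (20): a directional average, the maximum absolute difference over direction pairs 45° apart, and the maximum absolute difference over pairs 90° apart. The direction set {0°, 45°, 90°, 135°} is an assumption for illustration.

```python
import numpy as np

def rotation_invariant_features(wr):
    """Transform weighted roughness features indexed by direction into
    rotation-invariant quantities, for one scale and one derivative order.

    wr: dict {angle_deg: feature value}.
    Returns (average over directions, max |difference| for 45-degree pairs,
             max |difference| for 90-degree pairs).
    """
    angles = sorted(wr)                                          # e.g. [0, 45, 90, 135]
    avg = np.mean([wr[a] for a in angles])                       # nondirectional average
    d45 = max(abs(wr[a] - wr[(a + 45) % 180]) for a in angles)   # 45-degree difference
    d90 = max(abs(wr[a] - wr[(a + 90) % 180]) for a in angles)   # 90-degree difference
    return avg, d45, d90

# Example: a texture rotated by 45 degrees roughly permutes the directional
# features, but the three outputs stay the same.
wr = {0: 0.9, 45: 0.4, 90: 0.2, 135: 0.4}
wr_rot = {0: 0.4, 45: 0.9, 90: 0.4, 135: 0.2}
print(rotation_invariant_features(wr), rotation_invariant_features(wr_rot))
```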

In this work, we propose the feature vector (for the selected numbers of scales and directions)

(21)

where

The two component subvectors of the proposed FV are useful in explaining the information contained in the nondirectional and directional features, respectively.

Fig. 5 presents some of the features in the proposed FV computed for three textures that have been rotated from 0° to 360°. The features that belong to the nondirectional subvector do not vary with respect to the texture rotation angle. The features that belong to the directional subvector vary, but this variation is not significant.

We also introduce the following FV, which consists of ten rotational variant features, to obtain some information about the feature performance prior to applying the transformations of (17), (19), and (20)

(22)


Fig. 5. Features from the proposed feature vector, evaluated for three different textures when the images are rotated from 0° to 360°.

As a reminder, the feature vector in (21) is the one suggested in this work.

III. TEXTURE SEGMENTATION TECHNIQUE

In this paper, we improve an iterative K-means algorithm that we developed in our previous work [15]. The overall improved algorithm is presented here for completeness. The algorithm selects the cluster number based on a modified version of the variance ratio criterion (VRC) index introduced by Calinski et al. [13]

$\mathrm{VRC} = \dfrac{\mathrm{BCSS}/(c-1)}{\mathrm{WCSS}/(n-c)}$   (23)

where BCSS is the "between clusters sum of squares," WCSS is the "within cluster sum of squares," $n$ is the total number of pixels, and $c$ is the number of clusters. The "within" and "between" cluster sums of squares are, respectively, defined as

(24)

(25)

(26)

TSS in (26) is the "total sum of squares." In (24) and (26), the summations run over the elements of each FV, with each element compared to the corresponding cluster center and to the overall FV center, respectively. The number of clusters is incremented, and the estimated number of clusters is the one for which the index VRC is maximized.
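A sketch of the VRC computation as reconstructed in (23)-(26), i.e., the Calinski-Harabasz ratio of between-cluster to within-cluster scatter; feature vectors are the rows of X and labels are the cluster assignments.

```python
import numpy as np

def vrc_index(X, labels):
    """Variance ratio criterion (Calinski-Harabasz form) for a clustering.

    X: (n, d) array of feature vectors; labels: (n,) cluster assignments.
    VRC = (BCSS / (c - 1)) / (WCSS / (n - c)).
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = X.shape[0]
    clusters = np.unique(labels)
    c = clusters.size
    overall = X.mean(axis=0)
    wcss = 0.0
    bcss = 0.0
    for k in clusters:
        members = X[labels == k]
        center = members.mean(axis=0)
        wcss += np.sum((members - center) ** 2)                      # within-cluster scatter
        bcss += members.shape[0] * np.sum((center - overall) ** 2)   # between-cluster scatter
    return (bcss / (c - 1)) / (wcss / (n - c))

# Two well-separated clusters give a large VRC value.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
print(vrc_index(X, labels))
```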

Fig. 6. Examples that illustrate the significance of the VRC index.

Some insight about the significance of the VRC index is given in Fig. 6, where two-cluster examples are presented. Essentially, the WCSS is the total distance of the feature vectors from the center of their associated cluster, and it indicates the clusters' extent. In Fig. 6, the distance of each vector from the corresponding center is indicated with a line. The TSS is the total distance of all vectors from the overall center, and it is an indicative measure of cluster separability. In Fig. 6(a) and (b) the WCSS is identical [one can notice that both clusters 1 and 2 of Fig. 6(a) are identical to the ones in Fig. 6(b)]. On the other hand, the TSS for Fig. 6(b) is smaller (the two clusters are closer to the overall center). Therefore, the VRC index is smaller, as indicated by (23) and (25), which signifies that the separation of the two clusters in Fig. 6(a) is larger than in Fig. 6(b) with respect to the cluster extent. The example presented in Fig. 6(c) is a downscaled version of the example in Fig. 6(a). The WCSS and TSS in Fig. 6(c) are smaller than the ones in Fig. 6(a) by the same ratio. Thus, the VRC is the same in both examples.

Generally, the feature values at the boundaries are not representative of the cluster to which they belong. The modified index is

(27)

where the modified quantities are calculated as in (23) and (25), respectively, by omitting pixels that are closer than 14 pixels to the estimated boundaries.

In this paper, we propose an extra modification for a more accurate estimation of the number of clusters. Instead of simply looking for maximization of the modified index, we choose the following criterion, which we call the combined estimation criterion (CEC); a sketch of the decision logic is given after the list.

a) If the index for $c+1$ clusters is smaller than 98% of the index for $c$ clusters, the CEC is not satisfied.
b) If the index for $c+1$ clusters is larger than 102% of the index for $c$ clusters, the CEC is satisfied.
c) If the index for $c+1$ clusters is smaller than 102% but larger than 98% of the index for $c$ clusters, the CEC is satisfied only if TSS for $c+1$ clusters is smaller than 70% of TSS for $c$ clusters.
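Under the reconstruction above (comparing the index at $c+1$ clusters with the index at $c$ clusters), the CEC decision reduces to the following sketch. The 98%, 102%, and 70% thresholds are the ones stated in the text; an additional condition in case b) that was lost in the source extraction is omitted here.

```python
def cec_satisfied(vrc_next, vrc_cur, tss_next, tss_cur):
    """Combined estimation criterion for accepting c+1 clusters over c.

    vrc_next, vrc_cur: modified VRC index for c+1 and c clusters.
    tss_next, tss_cur: TSS for c+1 and c clusters.
    """
    if vrc_next < 0.98 * vrc_cur:        # case a): clear drop of the index
        return False
    if vrc_next > 1.02 * vrc_cur:        # case b): clear increase of the index
        return True
    return tss_next < 0.70 * tss_cur     # case c): nearly equal indices, require a TSS drop

print(cec_satisfied(1.00, 1.00, 60.0, 100.0))   # True: TSS dropped by more than 30%
```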

Page 7: Wavelet-based rotational invariant roughness features for ...people.cecs.ucf.edu/kasparis/docs/Wavelet-based rotational invariant... · signal filtered by a large-scale wavelet, and

CHARALAMPIDIS AND KASPARIS: WAVELET-BASED ROTATIONAL INVARIANT ROUGHNESS FEATURES 831

TABLE I
ITERATIVE K-MEANS CLUSTERING ALGORITHM

The motivation for the extra modification is the need for a second measure in order to resolve the problem of having index values within 2% of each other for different numbers of clusters. This measure is the drop (30%) of TSS from $c$ to $c+1$ clusters.

The iterative K-means-based clustering algorithm [15] is designed to produce finer segmentation at the boundaries between textures. Step 2c, shown in the following, is added to our algorithm in [15]. This step prevents the creation of erroneous strip-shaped clusters at the boundaries. The improved algorithm is summarized in Table I.

Fig. 7 presents a segmentation example, where the effects of the basic algorithmic steps are explained. Fig. 7(a) depicts an ideal segmentation result, where different grayscale regions correspond to different textures (the original image is not of importance here). At this point, it is assumed that the number of clusters or textures examined is the correct one (three).

Fig. 7(b) presents the initial segmentation result using the original K-means. In order to have a robust segmentation, the features have been smoothed by a 26 × 26 moving average prior to clustering. The result in Fig. 7(b) shows that the boundaries are smoothed out. Furthermore, there is an erroneous strip-shaped region at the boundary between clusters 1 and 3, marked as cluster 2. The erroneous strip is created because the feature extraction and smoothing windows around the boundaries include mixed information from two textures at a time. Also, some small spurious regions have been created.

Fig. 7. Segmentation example that illustrates the effect of the different iterative K-means algorithmic steps.

Fig. 7(c) shows the segmentation result after merging small to large regions. Merging is achieved by a sliding window that associates its central pixel to the majority cluster in the window. For instance, the square window shown in Fig. 7(b) will associate the central pixel to cluster 1, since the majority of the pixels in the window are labeled as cluster 1. The erroneous strip-shaped region and the spurious regions have been removed, but the boundaries are further smoothed out.
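A sketch of the merging step just described: a sliding window reassigns its central pixel to the majority label within the window. The window size here is a placeholder, and the mirrored boundary handling follows the convention stated at the end of this section.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_merge(labels, size=15):
    """Reassign each pixel to the most frequent label inside a size x size window.

    Removes small spurious regions and thin strips at cluster boundaries.
    """
    def majority(window):
        return np.bincount(window.astype(np.int64)).argmax()
    return generic_filter(np.asarray(labels), majority, size=size, mode="mirror")

# Example: a lone mislabeled pixel inside a uniform region is absorbed.
lab = np.zeros((20, 20), dtype=int)
lab[10, 10] = 1
print(majority_merge(lab, size=5)[10, 10])   # 0
```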

Fig. 7(d) presents the same image after determining the ambiguous region. Consider a square window centered at a designated pixel. This pixel is defined as ambiguous if any other pixel in this window has a different assigned classification. Essentially, ambiguous regions are the ones close to the cluster boundaries. The labels "A" and "U" indicate the pixel status, i.e., they suggest whether the pixel should be reexamined (label "A") or not (label "U"). The region marked "A" will be reexamined, but the features will be smoothed with a smaller moving-average window.

Fig. 8. Segmentation examples. The images named "or" are the original images, while the images named "seg" are the corresponding segmented images.

Fig. 7(e) shows the segmentation result at a later stage. Here, some part of the "A" region has already been associated to the existing clusters. One can notice that the details at the boundaries are better preserved. On the other hand, since the moving-average window is small at this stage of the algorithm, erroneous strip-shaped regions may appear again at cluster boundaries. This problem can be avoided if spatial proximity is taken into consideration. For instance, the pixel at the center of the square window shown in Fig. 7(e) will be associated either to cluster 1 or to cluster 2, since cluster 3 is spatially far away (no pixel marked cluster 3 exists in this window). Fig. 7(f) shows the final segmentation, which is close to the ideal in terms of boundary details.

The above algorithm is applied more than once for different selections of the initial centers, to increase the probability of approaching the global minimum. The criterion for determining which clustering is better is the minimization of the square error, which is equivalent to minimizing the quantity WCSS as it appears in (23). The next step would be to reapply the whole process after increasing the number of clusters by one. It is important to mention that, in the cases where the smoothing or merging windows extend beyond the image limits, the image beyond the edges is taken to be equal to the mirrored version of the original image.

Fig. 8. (Continued.) Segmentation examples. The images named "or" are the original images, while the images named "seg" are the corresponding segmented images.

IV. TEXTURE CLASSIFICATION

A simplified form of the Bayes distance classifier suggested in [10] is employed for classification. Part of the FVs (the training FVs) is used to train the classifier. This is achieved by computing the mean vector and the standard deviation vector of the training FVs for each class

(28)

where the training FVs are extracted from textures of class $c$, and the mean and standard deviation are computed over the total number of such vectors. In the test phase, an FV is associated to class $c$ if the simplified Bayes distance computed for class $c$

(29)

is minimum with respect to the distance computed for any other class. In (28) and (29), the quantities indexed per feature are the elements of the FV, the mean vector, and the standard deviation vector, respectively. Also, the distance is computed over the total number of features (FV elements).
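The exact simplified distance of (29) is not recoverable from the extracted text; the sketch below assumes the common diagonal-covariance Gaussian form, in which the distance is the per-feature squared deviation normalized by the class standard deviation plus a log-variance term. It is meant only to illustrate the train/test flow of this section.

```python
import numpy as np

def train(features_by_class):
    """Compute per-class mean and standard deviation vectors, as in (28)."""
    stats = {}
    for c, F in features_by_class.items():
        F = np.asarray(F, dtype=float)
        stats[c] = (F.mean(axis=0), F.std(axis=0) + 1e-12)
    return stats

def classify(x, stats):
    """Assign x to the class with the minimum simplified Bayes distance.

    Assumed distance: sum_j [ ((x_j - mu_j) / sigma_j)^2 + 2 * log(sigma_j) ].
    """
    x = np.asarray(x, dtype=float)
    def distance(mu, sigma):
        return np.sum(((x - mu) / sigma) ** 2 + 2.0 * np.log(sigma))
    return min(stats, key=lambda c: distance(*stats[c]))

# Toy example with two well-separated classes.
rng = np.random.default_rng(4)
data = {0: rng.normal(0, 1, (32, 5)), 1: rng.normal(4, 1, (32, 5))}
stats = train(data)
print(classify(np.full(5, 4.1), stats))   # expected: class 1
```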


Fig. 9. Segmentation example. (a) Original image that has been partially subjected to illumination reduction and (b) the corresponding segmentation result. (c) Original image that has been partially subjected to contrast reduction and (d) the corresponding segmentation result.

V. EXPERIMENTAL RESULTS

In this section, we present segmentation and classification results. The segmentation results demonstrate the effectiveness of our FV in estimating the number of different homogeneous textural regions in an image. The classification results verify the robustness of our FV by providing a high percentage of correct classification (PCC) for textures obtained from a large database. Furthermore, the classification results illustrate the feature insensitivity to rotation, as well as skew.

A. Segmentation

The weighted roughness features are computed in windows of size 9 × 9. Therefore, the parameter $N$ in (12)–(14) is equal to nine, and the other parameter takes the values three, five, and seven. The suggested FV in (21) is used. Feature extraction is followed by clustering using the iterative K-means presented in Section III. Thirty different mosaics are created, consisting of two, three, four, and five textures. Some segmentation results are presented in Fig. 8. The images named "or" are the original mosaics, while the images named "seg" are the corresponding segmented images.

The number of clusters (uniform textural regions in the image) is correctly estimated for 29 out of 30 images if the CEC presented in Section III is used. If the criterion is maximization of the modified index, the number of clusters is correctly estimated for 26 out of 30 images. In both cases, the robustness of our features is evident. We do not present the results in terms of percentage of correct clustering, since certain segmentation results are subjective. For instance, it is possible that portions of a texture possess a certain visual similarity to another texture. The examples presented in Fig. 8 are representative of the good-quality segmentation performance, which is a result of two factors: feature robustness and the detail-preserving clustering algorithm.

Image or12 consists of two textures presented as strips with different orientations. Image or13 consists of three textures, where one of them is presented with two different directions. In both examples, similar textures are correctly clustered together, regardless of the rotation angle. It is also important to notice that there are no erroneously created strip-shaped clusters at the boundaries between textures for any of the segmentation examples of Fig. 8. This is an effect of step 2c of the iterative K-means algorithm.

Fig. 10. The 64 × 64 samples from the images used for classification.

Fig. 9 presents segmentation results for image or13 after it has been subjected to two transformations. The results verify the fact that the roughness features are insensitive to absolute illumination and absolute contrast. Illumination reduction has been applied to the lower right quarter of the image in Fig. 9(a), and contrast reduction has been applied to the lower right quarter of the image in Fig. 9(c). The segmentation results are shown in Fig. 9(b) and (d), respectively, and they are almost unaffected by the transformations.


TABLE II
CLASSIFICATION PERCENTAGES FOR DIFFERENT FEATURE VECTORS

B. Classification

For the classification experiments, we use the same database used in [10]. The database consists of 77 textures of size 512 × 512 that belong to 63 different classes. The textures are obtained from the Computer Vision Group at the University of Bonn via the World Wide Web [16] and from the Brodatz database [11]. Fig. 10 shows a 64 × 64 texture sample from each texture. The number on the top of each sample indicates the class to which each sample belongs, and it also corresponds to the class number in Table II.

Fig. 11. Images: (a) and (b) are original images and (c) and (d) are the corresponding skewed images.

One FV is extracted from each 64 × 64 block for all given texture classes. We use 32 image blocks from one image per texture class for training and the rest of the blocks for testing. Thus, we have a total of 2464 training and 2464 test data points. Training and testing are performed using the simplified Bayesian classifier presented in Section IV.

We evaluate seven FVs for texture classification:

1) the proposed FV, containing 15 rotational invariant (directional and nondirectional) roughness features;
2) the FV containing eight rotational variant roughness features and two multiscale Hurst features for scales one and two;
3) the FV of ten directional multiscale Hurst features [see (18)], computed for consecutive scales and two directions;
4) the subvector of the proposed FV that contains the nondirectional information;
5) the subvector of the proposed FV that contains the directional information;
6) an FV of 40 wavelet energy features;
7) an FV of 48 real Gabor energy features.

As a reminder, the proposed FV and its subvectors were defined in (21). We present classification results using wavelet energy features, computed with the same wavelets that we use to extract the roughness features, to illustrate the superiority of the roughness features over energy. The wavelet energy features are computed as in (13).

The classification accuracy is shown in Table II. Table II shows that the proposed FV outperforms all other FVs, with an average PCC equal to 90.19%. The PCC for the multiscale Hurst FV has been verified against [10]: 85.79% in Table II and 85.69% in [10]. The PCC for the Gabor-based FV introduced in [10] was reported to be 89.57% for the same texture database, which is 0.6% worse than the proposed FV. The disadvantages of the Gabor FV in [10] are the large number of Gabor features needed (48 features); they are rotational variant, and they are sensitive to absolute contrast. Energy features computed using real Gabor filters [3] were also tested, and their PCC is also shown in Table II (86.08%).

The PCC for the rotational variant roughness FV is 89%, which is 3.28% higher than the PCC when the ten multiscale Hurst features are used. We present the PCC results for this FV in order to obtain some information about the feature performance prior to applying the transformations of (17), (19), and (20).

We also present the PCC for the nondirectional and directional subvectors, which is 81.51% and 65.58%, respectively. This result shows that directional information is important, since the PCC increases from 81.51% to 90.19% when the directional features are added to the nondirectional ones. From the results in Table II and from the corresponding texture samples, we can verify that textures containing some form of directional information, such as textures 3, 4, 15, 23, 24, 25, 34, 42, and 62, have high PCC using the directional features alone, while nondirectional textures such as 1, 2, 5, 10, 18, 19, 28, and 29 cannot be correctly classified.

The shaded columns of Table II present the PCC for the proposed FV and the multiscale Hurst FV when the images are randomly rotated between 0° and 360°, and also when the images are skewed. The pixel value for an image rotated by a given angle, at given coordinates, is set equal to the original image's pixel value at the coordinates obtained by applying the inverse rotation and keeping the integer part of the result, with the origin taken at the image center. The pixel value for a skewed image at given coordinates is set equal to the original image's pixel value at the correspondingly skewed coordinates, where, again, the integer part is kept. In the case where a pixel in the transformed (rotated or skewed) image is mapped to more than one pixel in the original image, its value is set equal to the average of the corresponding pixels in the original image. The textured images used in the experiments had a natural appearance when subjected to the above transformations. Some examples of skewed images are shown in Fig. 11. The rotation effect is clearly shown in Fig. 9. In order to test the FV performance without using prior information, training is performed using only original and not transformed images.
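The exact coordinate expressions for the rotation and skew were lost in the source extraction; the sketch below implements the described idea for rotation only, assigning each output pixel the original pixel found by applying a rotation to the centered output coordinates and taking the integer part. Averaging in the many-to-one case and the skew mapping are omitted.

```python
import numpy as np

def rotate_nearest(image, phi):
    """Rotate an image by an angle phi (radians) using inverse mapping.

    Each output pixel, measured from the image center, samples the original
    image at rotated coordinates truncated to their integer part.
    """
    img = np.asarray(image)
    h, w = img.shape
    out = np.zeros_like(img)
    ci, cj = (h - 1) / 2.0, (w - 1) / 2.0
    for i in range(h):
        for j in range(w):
            y, x = i - ci, j - cj
            src_i = int(np.floor(ci + y * np.cos(phi) - x * np.sin(phi)))
            src_j = int(np.floor(cj + y * np.sin(phi) + x * np.cos(phi)))
            if 0 <= src_i < h and 0 <= src_j < w:
                out[i, j] = img[src_i, src_j]
    return out

rng = np.random.default_rng(5)
rotated = rotate_nearest(rng.standard_normal((64, 64)), np.deg2rad(30))
```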

The PCC for the proposed FV is virtually the same for the rotated images (90.16%) as for the original images, while the multiscale Hurst FV fails (24.98%). This is expected, since the Hurst FV is not rotational invariant. The PCC of the proposed FV for the skewed images is 84.38%, which is about 6% lower than for the original images. This happens because some of the textures are significantly affected when skewed. Still, the PCC is not affected as much as for the Hurst FV.

It is important to mention that the Hurst FV is less robust than the proposed FV, but it is more computationally efficient. Computation of the proposed FV requires 15 s for an image of size 512 × 512 using a Pentium III at 1.5 GHz, while the Hurst FV has insignificant time requirements (less than 1 s). On the other hand, the proposed FV is more computationally efficient than the Gabor energy FV, which requires 38 s.

VI. CONCLUSIONS

A new roughness feature set has been evaluated for texture segmentation and classification. The feature set is an extension of the multiscale Hurst feature set introduced in [10]. The roughness feature vector has been shown to result in high-quality texture segmentation, as well as an increased percentage of correct classification with respect to the other feature vectors that were tested in this work.

In several applications, feature insensitivity to different image transformations is not only a desired feature property but sometimes a necessary one. Our roughness features retain the important characteristics that fractal-based features possess, namely, insensitivity to absolute image intensity and insensitivity to contrast. Furthermore, they are shown to be insensitive to rotation, and the percentage of correct classification is relatively high when the images are subjected to skew, while the Hurst features fail to achieve a high percentage in both aforementioned cases. Therefore, the roughness features seem to be an attractive choice for many texture-related applications.

REFERENCES

[1] J. Arrault, A. Arneodo, A. Davis, and A. Marsak, "Wavelet based multifractal analysis of rough surfaces: Application to cloud models and satellite data," Phys. Rev. Lett., vol. 79, no. 1, pp. 75–79, July 1997.

[2] A. Arneodo, N. Decoster, and S. G. Roux, "Intermittency, log-normal statistics, and multifractal cascade process in high-resolution satellite images of cloud structure," Phys. Rev. Lett., vol. 83, no. 6, pp. 1255–1259, Aug. 1999.

[3] A. K. Jain and F. Farrokhnia, "Unsupervised texture segmentation using Gabor filters," Pattern Recognit., vol. 24, pp. 1167–1185, 1991.

[4] B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 837–842, Aug. 1996.

[5] P. C. Chen and T. Pavlidis, "Segmentation of texture using correlation," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-5, pp. 64–69, Jan. 1983.

[6] A. Pentland, "Fractal-based description of natural scenes," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, pp. 661–674, 1984.

[7] J. Garding, "Properties of fractal intensity surfaces," Pattern Recognit. Lett., vol. 8, pp. 319–324, 1988.

[8] B. B. Chaudhuri and N. Sarkar, "Texture segmentation using fractal dimension," IEEE Trans. Pattern Anal. Machine Intell., vol. 17, pp. 72–77, Jan. 1995.

[9] S. Mallat and W. L. Hwang, "Singularity detection and processing with wavelets," IEEE Trans. Inform. Theory, vol. 38, pp. 617–643, Mar. 1992.

[10] L. M. Kaplan, "Extended fractal analysis for texture classification and segmentation," IEEE Trans. Image Processing, vol. 8, pp. 1572–1585, Nov. 1999.

[11] P. Brodatz, Textures: A Photographic Album for Artists and Designers. New York: Dover, 1966.

[12] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Processing, vol. 4, pp. 1549–1560, 1995.

[13] T. Calinski and J. Harabasz, "A dendrite method for cluster analysis," Commun. Statist., pp. 1–26, 1974.

[14] G. R. Cross and A. K. Jain, "Markov random field texture models," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-5, pp. 25–39, Jan. 1983.

[15] T. Kasparis, D. Charalampidis, M. Georgiopoulos, and J. Rolland, "Segmentation of textured images based on fractals and image filtering," Pattern Recognit., vol. 34, no. 10, pp. 1963–1973, Oct. 2001.

[16] [Online]. Available: http://www-dbv.cs.uni-bonn.de/seg/

[17] W. T. Freeman and E. H. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, pp. 891–906, Sept. 1991.

Dimitrios Charalampidis (S'99–M'02) received the Diploma degree in electrical engineering and computer technology from the University of Patras, Patras, Greece, in 1996. He received the M.S. and Ph.D. degrees in electrical engineering from the University of Central Florida, Orlando, in 1998 and 2001, respectively.

In 2001, he joined the Electrical Engineering Department of the University of New Orleans, New Orleans, LA, where he is presently an Assistant Professor. His research interests include image processing, digital signal processing, neural networks, pattern recognition, and applications of signal processing to remote sensing. Currently, his research emphasis is on image segmentation and classification and neural network algorithms.

Takis Kasparis (S'82–M'85) received the Diploma degree in electrical engineering from the National Technical University of Athens, Athens, Greece, in 1980, and the M.E.E.E. and Ph.D. degrees in electrical engineering from the City College of New York in 1982 and 1988, respectively.

From 1985 to 1989, he was an Electronics Consultant. In 1989, he joined the Electrical Engineering Department, University of Central Florida, Orlando, where he is presently an Associate Professor. His research interests include digital signal and image processing, texture analysis, and computer vision. He has served on the steering committees of several conferences and is currently an Associate Editor for Pattern Recognition.

Dr. Kasparis is a member of the Technical Chamber of Greece.