
Research Article
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

Md. Rabiul Islam

Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh

Correspondence should be addressed to Md. Rabiul Islam; rabiul.cse@yahoo.com

Received 15 December 2013; Revised 29 May 2014; Accepted 19 June 2014; Published 10 July 2014

Academic Editor: Daoqiang Zhang

Copyright © 2014 Md. Rabiul Islam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2014, Article ID 380585, 11 pages; http://dx.doi.org/10.1155/2014/380585

The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing N Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

1. Introduction

Biometrics deals with the identification of individuals based on their biological or behavioral characteristics and provides a significant component of automatic person identification technology based on unique features like the face, iris, retina, speech, palmprint, hand geometry, signature, fingerprint, and so forth [1]. Iris recognition is among the most reliable existing biometric technologies because of the unique features of the human iris. The iris offers accuracy, uniqueness, high information content, stability, reliability, and real-time access capability compared with other biometric patterns [2, 3]. Such unique features in the anatomical structure of the iris facilitate the differentiation among individuals. The human iris pattern is not changeable and remains constant over a person's lifetime from one year of age until death [1, 4]. The iris is a thin circular diaphragm lying between the blackish pupil and the whitish sclera [5]. Because of this uniqueness and stability, iris recognition is a reliable person identification technique [6].

Though a unimodal biometric system performs well in some cases, it involves a variety of problems like nonuniversality, susceptibility to spoofing, noise in the sensed information, intraclass variations, and interclass similarities [7]. A multimodal biometric system (i.e., the combination of more than one unimodal biometric system) can solve some of the above-mentioned problems. Recently, scientists have been attempting to combine multiple modalities for person identification, which is referred to as multimodal biometrics. Multimodal biometric identification systems are capable of utilizing more than one physical, behavioral, or chemical characteristic for enrollment. Multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance compared to systems based on a single biometric modality. For combining the multimodal information, generally four different fusion strategies, that is, sensor level fusion, feature level fusion, score level fusion, and decision level fusion, have been used for various multimodal systems [8, 9]. In sensor level fusion, information taken from multiple sensors is processed and integrated to generate a new vector. Extracted features are fused to produce a combined vector in feature level fusion. For score level fusion, scores are computed by different classifiers and combined to get the final result. Lastly, several classifiers' outputs are combined to



achieve the multimodal biometric system in the decision fusion approach, which includes Multiple Classifier Selection (MCS) and Dynamic Classifier Selection (DCS).

In this paper, a new hybrid fusion based pair of iris recognition approaches is proposed, where three different fusion levels, namely, feature fusion, score fusion, and decision fusion, are used. The Discrete Hidden Markov Model (DHMM) has been used to classify the input iris pattern. For decision fusion, four different classifier outputs, that is, a left iris based unimodal DHMM classifier, a right iris based unimodal DHMM classifier, a left-right iris feature fusion based multimodal DHMM classifier, and a left-right iris log-likelihood ratio score fusion based multimodal DHMM classifier, are combined to achieve the final result. The next sections of the paper deal with related work and the proposed fusion scheme, automated iris segmentation and feature extraction techniques, and experimental results for each of the four different classifiers and their combined output, with a comparison against existing multimodal iris recognition systems.

2. Literature Review and Architecture of the Proposed System

Very little work has been done in iris recognition where multiple modalities of the iris have been used. Most of the multimodal iris biometric work combines the iris with other biometric techniques. The largest cluster of papers in this area deals with the combination of face and iris, and other multibiometric systems involve almost any combination of the iris with other modalities like fingerprint, palmprint, speech, and so forth [11]. Two different fusion strategies, that is, feature and score fusion, have been used in those works, applied with different strategies across multimodal techniques. In feature fusion, person identification has been performed by combining iris and facial feature vectors [12]. Wang et al. also used a face and iris common feature vector for multimodal biometric recognition [13, 14]. Rattani and Tistarelli proposed a multiunit, multimodal biometric system where face, left iris, and right iris image vectors were used, fusing these three different features to enhance performance [15]. An iris and fingerprint multimodal feature fusion based cryptographic key generation system [16] and a person identification system [17] were developed. Various multimodal score level fusion schemes were also proposed by different researchers. In [18], different algorithms were used to calculate the scores of the iris and facial multimodalities, and a Support Vector Machine (SVM) was used to combine those scores. A user-specific score fusion approach for iris and face [19], iris and fingerprint based multimodal score fusion [20], and score fusion for an iris and palmprint multibiometric system [21] were also developed.

Some fusion approaches were also proposed for iris recognition itself using feature and score fusion methods. Using an iris feature fusion method, Vatsa et al. combined left and right iris images for a multimodal biometric system [22]. Hollingsworth et al. [23] created a single average image from multiple frames of an iris video, which showed that single level fusion is better than the multigallery score fusion method. Among score fusion methods, Ma et al. [24] used three different templates from an iris image and averaged their scores to get the final result. The minimum match score was used instead of iris image averaging to obtain the final score for iris biometric recognition by Krichen et al. [25]. Schmid et al. developed a log-likelihood ratio method to fuse the scores, which can perform better than Hamming distance based score fusion [26].

Hollingsworth et al. [27] proposed an approach of image averaging using single level features from iris images to improve the matching performance. They compared their system with three reimplemented score fusion approaches of Ma et al. [24], Krichen et al. [25], and Schmid et al. [26]. They reported that the proposed feature fusion approach performed better than Ma's or Krichen's score level fusion methods of N Hamming distance scores and also performed better than Schmid's log-likelihood method of score fusion.

In this proposed work, a combined approach of feature fusion and score fusion based pair of iris multibiometric systems has been developed, as shown in Figure 1. Features from the left and right iris images are fused, and a log-likelihood ratio based score fusion method is applied to find the iris recognition score. Finally, to obtain the final recognition result, a voting method is applied to combine the outputs of Multiple Classifier Selection (MCS), namely, the individual modality of each iris (i.e., left and right iris), the feature fusion based modality, and the log-likelihood ratio based modality. The Discrete Hidden Markov Model (DHMM) has been used as the classifier, and Principal Component Analysis (PCA) has been applied to reduce the dimensionality of the iris features at different levels of the overall proposed approach. Various experiments have been performed comparing the proposed feature and decision fusion based MCS approach with the existing Hamming distance score fusion approach proposed by Ma et al. [24], the log-likelihood ratio score fusion approach proposed by Schmid et al. [26], and the feature fusion approach proposed by Hollingsworth et al. [27].
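As a rough, non-authoritative sketch of the decision scheme just described: the four classifier decisions are combined by voting, and ties fall back to the feature fusion classifier (a rule discussed later in Section 8). The classifier interfaces below are assumptions of this illustration, not code from the paper.

```python
from collections import Counter

def mcs_recognize(left_feat, right_feat, classifiers):
    """Combine the four classifier decisions by voting.

    `classifiers` maps the four classifier names used below to callables that
    return an identity label; these interfaces are illustrative assumptions."""
    votes = {
        "left_unimodal":  classifiers["left"](left_feat),
        "right_unimodal": classifiers["right"](right_feat),
        "feature_fusion": classifiers["feature_fusion"](left_feat, right_feat),
        "score_fusion":   classifiers["score_fusion"](left_feat, right_feat),
    }
    counts = Counter(votes.values())
    top_label, top_count = counts.most_common(1)[0]
    # No clear majority (all four disagree, or a tie): fall back to the
    # feature fusion classifier, following the tie-breaking rule of Section 8.
    if top_count == 1 or list(counts.values()).count(top_count) > 1:
        return votes["feature_fusion"]
    return top_label
```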

3. Iris Segmentation and Feature Extraction

Since iris image preprocessing and feature extraction play a vital role in the overall recognition performance, standard iris localization, segmentation, normalization, and feature encoding procedures have been applied in the proposed system.

The first step is to isolate the actual iris region from the eye image. For this, an edge map is created using an edge detection algorithm with the Canny directional edge detector. Figure 2(b) shows the edge map after applying the Canny edge detector. To extract the original iris region, two different boundaries have to be predicted. Accordingly, a vertical edge map has been created for detecting the outer boundary, that is, the iris/sclera boundary, and a horizontal edge map has been created to observe the inner boundary, that is, the pupil/iris boundary. The circular Hough transform has been applied to deduce the center coordinates and radii of the outer and inner boundaries of the iris [28, 29]. The results of using the circular Hough transform are shown in Figure 2(c).
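A minimal OpenCV sketch of this localization step is given below. It is a simplification under stated assumptions: the paper builds separate vertical and horizontal edge maps for the outer and inner boundaries, whereas this sketch reuses one blurred image, and all thresholds and radius ranges are illustrative only.

```python
import cv2

def localize_iris(eye_gray):
    """Edge map plus circular Hough transform for the two iris boundaries.

    Thresholds and radius ranges are illustrative assumptions, not the paper's values."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)                    # edge map as in Figure 2(b)
    # Outer (iris/sclera) boundary candidates
    outer = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 200,
                             param1=150, param2=30, minRadius=80, maxRadius=150)
    # Inner (pupil/iris) boundary candidates
    inner = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 200,
                             param1=150, param2=30, minRadius=20, maxRadius=70)
    return edges, outer, inner
```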


Figure 1: Block diagram of the proposed feature and decision fusion based Multiple Classifier Selection for iris recognition.

Figure 2: (a) Eye image, (b) edge map using the Canny edge detector, (c) results of the circular Hough transform, and (d) removal of the eyelid regions.

After doing that, we have to remove those parts of the iris which are overlapped by the eyelids. The eyelids were isolated by first fitting a line to the upper and lower eyelid using the linear Hough transform [30]. A second horizontal line is then drawn which intersects with the first line at the iris edge that is closest to the pupil. This process is done for both the top and bottom eyelids. The second horizontal line allows maximum isolation of the eyelid regions. The result of removing the eyelid regions is shown in Figure 2(d).

In normalization, the iris region is transformed into fixed dimensions so that two different iris images can be compared at the same spatial locations. Dimension inconsistencies of the iris region occur due to stretching of the iris caused by pupil dilation at varying levels of light illumination, image capturing distance, angular deflection of the camera and eye, and so forth. The normalization process removes all of the above-mentioned difficulties to produce a fixed-size iris vector. Daugman's Rubber Sheet Model [31] has been used to normalize the iris region. In this process, each point of the iris region is converted into a pair of polar coordinates $(r, \theta)$, where $r$ lies on the interval between 0 and 1 and $\theta$ is the angle between 0 and $2\pi$, as shown in Figure 3. A problem can occur due to rotation of the eye within the eye socket in the iris image. For this reason, the center of the pupil has been considered as the reference point, and a number of data points are selected along each radial line, which is called the radial resolution. In this way, we obtain a fixed dimension of the iris region, which is shown in Figure 3(b).
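The following sketch illustrates this rubber-sheet style remapping under stated assumptions: the circle parameters are already estimated, eyelid masking is omitted, and the grid sizes are illustrative rather than the paper's values.

```python
import numpy as np

def rubber_sheet(eye_gray, pupil, iris, radial_res=64, angular_res=150):
    """Remap the annular iris region onto a fixed (r, theta) grid.

    `pupil` and `iris` are (x, y, radius) circles found earlier."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)
    # Boundary points of the pupil and iris circles along each angle
    x_in, y_in = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
    x_out, y_out = xi + ri * np.cos(theta), yi + ri * np.sin(theta)
    # Linear interpolation between the inner and outer boundaries for each r
    xs = (1 - r)[:, None] * x_in[None, :] + r[:, None] * x_out[None, :]
    ys = (1 - r)[:, None] * y_in[None, :] + r[:, None] * y_out[None, :]
    h, w = eye_gray.shape
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    return eye_gray[ys, xs]       # shape: (radial_res, angular_res)
```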

For extracting the features, it is important to capture the most important information present in the iris region. Different alternative techniques can be used for feature extraction, including Gabor filtering techniques, zero-crossing 1D wavelet filters, Log-Gabor filters, and Haar wavelets.


Figure 3: (a) Normalization process and (b) iris region with fixed dimensions.

Figure 4: Feature encoding and dimensionality reduction process (2D Log-Gabor filtering of the normalized iris region into a 9600-value feature vector, reduced to 550 values by PCA).

In this work, the Log-Gabor filtering technique has been applied to extract the iris features effectively; 9600 feature values have been taken from each iris region. The Principal Component Analysis method has then been used to reduce the dimension of the feature vector, retaining 550 feature values. The feature extraction and dimensionality reduction process is shown in Figure 4.
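As a hedged sketch of this encoding stage, the fragment below applies a 1D log-Gabor filter to each row of the normalized iris strip in the frequency domain and then reduces the resulting feature vectors with PCA. The filter parameters and the use of scikit-learn's PCA are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def log_gabor_features(norm_iris, f0=0.1, sigma_ratio=0.55):
    """1D log-Gabor filtering of each row of the normalized iris strip.

    Filter parameters are illustrative; the real and imaginary responses are
    concatenated into a single raw feature vector."""
    radius = np.abs(np.fft.fftfreq(norm_iris.shape[1]))
    radius[0] = 1.0                                  # avoid log(0) at DC
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0] = 0.0                                      # suppress the DC component
    filtered = np.fft.ifft(np.fft.fft(norm_iris, axis=1) * lg[None, :], axis=1)
    return np.concatenate([filtered.real.ravel(), filtered.imag.ravel()])

# PCA reduction of the raw feature vectors (9600 values per iris in the paper)
# to 550 values; fitting the projection assumes at least 550 training samples.
# train_features = np.stack([log_gabor_features(s) for s in training_strips])
# pca = PCA(n_components=550).fit(train_features)
# reduced = pca.transform(train_features)            # shape: (n_samples, 550)
```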

4. Iris Recognition Using HMM Classifier

In the training phase of the proposed iris recognition system, for each iris $k$ a DHMM (Discrete HMM) $\theta_k$ has been built [32]. The model parameters $(A, B, \Pi)$ have been estimated to optimize the likelihood of the training set observation vectors for the $k$th iris by using the Baum-Welch algorithm. The Baum-Welch reestimation formulas have been considered as follows [33]:

$$\Pi_i = \gamma_1(i), \qquad a_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad b_j(k) = \frac{\sum_{t=1,\; o_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}, \quad (1)$$

where

$$\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}, \qquad \gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j). \quad (2)$$
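As a hedged illustration (not code from the paper), the following NumPy sketch implements one Baum-Welch reestimation step corresponding to equations (1) and (2) for a single observation sequence; it uses unscaled forward-backward recursions, which is adequate only for short sequences. In practice the step is iterated until the training likelihood stops improving.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Unscaled forward-backward recursions for a discrete HMM (short sequences only).
    A: (N, N) transitions, B: (N, M) emissions, pi: (N,) initial distribution,
    obs: list of integer observation symbols of length T."""
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta

def baum_welch_step(A, B, pi, obs):
    """One reestimation step following equations (1) and (2)."""
    N, M, T = A.shape[0], B.shape[1], len(obs)
    alpha, beta = forward_backward(A, B, pi, obs)
    likelihood = alpha[-1].sum()
    gamma = alpha * beta / likelihood                     # gamma_t(i)
    xi = np.zeros((T - 1, N, N))                          # xi_t(i, j)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]][None, :] * beta[t + 1][None, :]
        xi[t] /= xi[t].sum()
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros((N, M))
    for k in range(M):
        new_B[:, k] = gamma[np.asarray(obs) == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```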

Figure 5: Block diagram of the DHMM based iris recognition system (modified) [10].

In the testing phase, for each unknown iris to be recognized, the processing shown in Figure 5 has been carried out. This procedure includes:

(i) measurement of the observation sequence $O = o_1, o_2, \ldots, o_n$ via a feature analysis of the iris pattern;
(ii) transformation of the continuous values of $O$ into integer values;
(iii) calculation of the model likelihoods for all possible models, $P(O \mid \theta_k)$, $1 \le k \le K$;
(iv) declaration of the iris as belonging to person $k^*$ whose model likelihood is the highest, that is,

$$k^* = \arg\max_{1 \le k \le K} \left[ P\left(O \mid \theta_k\right) \right]. \quad (3)$$

In this proposed work, the probability computation step has been performed using Baum's forward-backward algorithm [33, 34].
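A minimal sketch of the testing-phase decision rule of equation (3) is given below, assuming each enrolled iris $k$ is represented by estimated matrices $(A, B, \Pi)$; the scaled forward recursion avoids numerical underflow when computing the log-likelihoods.

```python
import numpy as np

def log_likelihood(A, B, pi, obs):
    """Scaled forward algorithm returning log P(O | theta) for one DHMM."""
    log_p, alpha = 0.0, pi * B[:, obs[0]]
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c            # rescale to avoid numerical underflow
    return log_p

def recognize(models, obs):
    """Equation (3): choose the enrolled iris k* maximizing P(O | theta_k).
    `models` is a list of (A, B, pi) NumPy tuples, one DHMM per enrolled iris."""
    scores = [log_likelihood(A, B, pi, obs) for (A, B, pi) in models]
    return int(np.argmax(scores)), scores
```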

Medium-sized iris datasets have been used for most recent iris recognition tasks; however, it is difficult to evaluate performance accurately with such datasets. More and more large-scale iris recognition systems are being deployed in real-world applications, and many new problems arise in the classification and indexing of large-scale iris image databases, so scalability is another challenging issue in iris recognition. The CASIA-IrisV4 iris database has been released to promote research on long-range and large-scale iris recognition systems. As a result, to measure the performance of the overall iris recognition system, the CASIA-IrisV4 iris database [35] has been used.

The CASIA-IrisV4 database was developed and released to the international biometrics community, having been updated from CASIA-IrisV1 to CASIA-IrisV3 since 2002. More than 3000 users from 70 countries or regions have downloaded CASIA-Iris, and much excellent work on iris recognition has been done based on these iris image databases. CASIA-IrisV4 contains 54,601 iris images from 1,800 genuine and 1,000 virtual subjects. Iris images are collected under near-infrared illumination or synthesized and are represented as 8-bit gray level. There are six data sets which were collected at different times, and four different methodologies were used, that is, CASIA-Iris-Interval, CASIA-Iris-Lamp, CASIA-Iris-Distance, and CASIA-Iris-Thousand. Since CASIA-Iris-Interval is well suited for studying the detailed texture features of iris images, which are captured by a close-up iris camera, this dataset has been used for the experimental work of the proposed system. The most compelling feature of this database is its circular NIR LED array with suitable luminous flux for iris imaging.

For measuring the accuracy of the individual left iris and right iris based unimodal recognition systems, the critical parameter, that is, the number of hidden states of the DHMM, can affect the performance of the system. A tradeoff is made to explore the optimum number of hidden states, and comparison results with Receiver Operating Characteristic (ROC) curves are shown in Figure 6, which represents the left and right iris based unimodal recognition performance combined.

5. Feature Fusion Based HMM Classifier

Feature level fusion can significantly improve the performance of a multibiometric iris recognition system, besides improving population coverage, deterring spoof attacks, increasing the degrees of freedom, and reducing the failure-to-enroll rate. Although the storage requirements, processing time, and computational demands of a feature fusion based system are much higher than those of a unimodal system for iris recognition [36], effective integration of left and right iris features can remove or reduce most of the above-mentioned problems. Feature level fusion of left and right iris features is an important fusion strategy which can improve the overall performance of the proposed system. In feature level fusion, more information is retained compared with score level fusion and decision level fusion. As a result, it can be expected that feature level fusion can achieve greater performance than the other fusion strategies of the proposed multimodal iris recognition system.

Feature level fusion can be achieved by simple concatenation of the feature sets taken from the left and right information sources. By concatenating two feature vectors $U' = \{u'_1, u'_2, \ldots, u'_m\}$ and $V' = \{v'_1, v'_2, \ldots, v'_n\}$, a new feature vector $UV' = \{u'_1, u'_2, \ldots, u'_m, v'_1, v'_2, \ldots, v'_n\}$, $UV \in R^{m+n}$, has been created. The objective is to combine these two feature sets in order to create a new feature vector $UV$ that better represents the individual. When the left and right iris feature vectors are combined, the dimensionality of the new feature vector is very large. As a result, a dimensionality reduction technique is necessary to reduce the searching domain of the learned database. The feature selection process chooses a minimal feature set of size $k$, where $k < (m+n)$, that improves the performance on a training set of feature vectors. The objective of this phase is to find an optimal subset of features from the complete feature set. Principal Component Analysis [37] has been used to reduce the dimensionality of the feature vector. Figure 7 shows the process of the left and right iris feature fusion based multimodal technique in the proposed system.
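A minimal sketch of this feature-level fusion follows, assuming the left and right feature matrices are already extracted (one row per sample) and adopting the 1100-dimensional target shown in Figure 7 as a tunable choice.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(left_feats, right_feats, k=1100):
    """Concatenate left and right iris feature matrices (one row per sample)
    into UV in R^(m+n) and reduce to k dimensions with PCA.

    k = 1100 follows the dimensionality shown in Figure 7; at least k training
    samples are assumed to be available for fitting the projection."""
    fused = np.concatenate([left_feats, right_feats], axis=1)   # (n_samples, m + n)
    pca = PCA(n_components=k).fit(fused)
    return pca.transform(fused), pca
```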

Figure 6: ROC curves (false acceptance rate versus false rejection rate) of the left iris and right iris based unimodal systems for 5, 10, 15, 20, and 25 hidden states.

Figure 7: Feature fusion based multimodal iris recognition system (left and right iris feature vectors of 9600 values each are concatenated into a 19200-dimensional vector and reduced to 1100 values by PCA based dimensionality reduction before DHMM classification).


In feature level fusion, the optimum number of hidden states of the DHMM has been chosen, and Figure 8 shows the comparison results with ROC curves. Figure 9 shows the performance comparison among the unimodal left iris, unimodal right iris, and left-right iris feature fusion based recognition systems. The primary goal of the left-right iris feature fusion based multimodal iris recognition system is to achieve performance equal to or better than that of either the left or the right unimodal iris recognition system. When the noise level of the right iris is high, the left iris unimodal system performs better than the right iris unimodal system, and thus the left-right iris recognition performance should be at least as good as that of the left iris unimodal system. When the noise level of the left iris is high, the right iris recognition performance is better than the left one, and the integrated performance should be at least the same as or better than the performance of the right iris recognition. The system also works very well when the left and right iris images do not contain noise. Figure 9 shows the above-mentioned performance of the left-right iris feature fusion based multimodal iris recognition.

Figure 8: Recognition performance of the left-right iris feature fusion based multimodal system for different numbers of DHMM hidden states (5, 10, 15, 20, and 25).

Figure 9: Performance comparison among the unimodal left iris, unimodal right iris, and left-right iris feature fusion based multimodal recognition systems.

6. Likelihood Ratio Score Fusion Based HMM Classifier

In the left-right iris feature fusion based method, the left iris and right iris features from both modalities are combined into one high-dimensional vector. However, the method considers all modalities with equal weight, which is the main disadvantage of the feature fusion based multimodal method. This problem can be solved by using different weights according to the different noise conditions of the left and right iris modalities. The method of score fusion by assigning a weight to each modality can be used for this purpose. It also allows the dynamic adjustment of the importance of each stream through the weights according to its estimated reliability.

The left-right iris likelihood ratio based score fusion method is a score fusion technique in which the reliability of each modality is measured using the output of the DHMM classifier for both the left and right iris features. If one of the modalities becomes corrupted by noise, the other modality fills the gap through its increased weight, and the recognition rate improves with this score fusion based method. This also holds in the case of a complete interruption of one stream: in this case, the weight of the uncorrupted modality should be close to maximum and the weight assigned to the missing stream close to zero, which practically makes the system revert to single-stream recognition automatically. This process is instantaneous and also reversible; that is, if the missing stream is restored, the weights return to their levels before the interruption. The integrated weight, which determines the amount of contribution from each modality in the left-right iris score fusion based recognition system, is calculated from the relative reliability of the two modalities. Figure 10 shows the process of the likelihood ratio score fusion based pair of iris recognition systems.

To calculate the score fusion result, the DHMM outputs of the individual iris (i.e., left and right) recognition are combined by a weighted sum rule to produce the final score fusion result. For a given left-right iris test datum of $O_L$ and $O_R$, the recognized class $C^*$ is given by [38]:

$$C^* = \arg\max_i \left[ \gamma \log P\left(O_L \mid \lambda_L^i\right) + (1 - \gamma) \log P\left(O_R \mid \lambda_R^i\right) \right], \quad (4)$$

where $\lambda_L^i$ and $\lambda_R^i$ are the left iris and right iris DHMMs for the $i$th class, respectively, and $\log P(O_L \mid \lambda_L^i)$ and $\log P(O_R \mid \lambda_R^i)$ are their log-likelihoods against the $i$th class.

The weighting factor $\gamma$ determines the contribution of each modality to the final decision. Of the two most popular integration approaches, namely, baseline reliability ratio based integration and $N$-best recognition hypothesis reliability ratio based integration, baseline reliability ratio based integration has been used, where the integration weight is calculated from the reliability of each individual modality. The reliability of each modality can be calculated, in the most appropriate and best performing way, as [39]:

$$S_m = \frac{1}{N-1} \sum_{i=1}^{N} \left( \max_j \log P\left(O \mid \lambda^j\right) - \log P\left(O \mid \lambda^i\right) \right), \quad (5)$$

which means that the average difference between the maximum log-likelihood and the other ones is used to determine the reliability of each modality. $N$ is the number of classes being considered to measure the reliability of each modality, $m \in \{L, R\}$.

Then the integration weight of the left iris reliability measure, $\gamma_L$, can be calculated by [40]:

$$\gamma_L = \frac{S_L}{S_L + S_R}, \quad (6)$$

where $S_L$ and $S_R$ are the reliability measures of the outputs of the left iris and right iris DHMMs, respectively.


Figure 10: Process of the likelihood ratio score fusion based pair of iris recognition systems.

The integration weight of the right iris modality can be found as

$$\gamma_R = 1 - \gamma_L. \quad (7)$$
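A compact sketch of the reliability-weighted score fusion of equations (4)-(7) follows, assuming the per-class log-likelihoods of the left and right DHMM streams have already been computed (e.g., with the forward algorithm shown earlier).

```python
import numpy as np

def reliability(log_likelihoods):
    """Equation (5): average gap between the best log-likelihood and the others."""
    ll = np.asarray(log_likelihoods, dtype=float)
    return np.sum(ll.max() - ll) / (len(ll) - 1)

def score_fusion_decision(ll_left, ll_right):
    """Equations (4), (6), and (7): reliability-ratio weights and weighted-sum rule
    over the per-class log-likelihoods of the left and right DHMM streams."""
    s_l, s_r = reliability(ll_left), reliability(ll_right)
    gamma_l = s_l / (s_l + s_r)
    gamma_r = 1.0 - gamma_l
    fused = gamma_l * np.asarray(ll_left) + gamma_r * np.asarray(ll_right)
    return int(np.argmax(fused)), gamma_l, gamma_r
```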

The results after applying the score fusion approach to the iris recognition system are shown in Figure 11. The results show the comparison among the unimodal left iris, unimodal right iris, and left-right iris score fusion based multimodal recognition systems. Here, the score fusion approach achieves a higher recognition rate than either individual unimodal system of the left and right iris.

7. Multiple Classifier Fusion for the Proposed System

An effective way to combine multiple classifiers is required when a set of classifier outputs is created. Various architectures and schemes have been proposed for combining multiple classifiers [41]. The majority vote [42-45] is the most popular approach. Other voting schemes include the maximum, minimum, median, Nash [46], average [47], and product [48] schemes. Other approaches to combining classifiers include rank-based methods such as the Borda count [49], the Bayes approach [44, 45], the Dempster-Shafer theory [45], the fuzzy integral [50], fuzzy connectives [51], fuzzy templates [52], probabilistic schemes [53], and combination by neural networks [54]. Majority, average, maximum, and Nash voting techniques [41, 47] have been used to find the most efficient voting technique for combining the four classifier outputs in this work.

In the majority voting technique, the correct class is the one most often chosen by the different classifiers. If all the classifiers indicate different classes, or in the case of a tie, the one with the highest overall output is selected as the correct class.

Figure 11: Results of comparison among the unimodal left iris, unimodal right iris, and left-right iris likelihood ratio score fusion based multimodal recognition systems.

For the maximum voting technique, the class with the highest overall output is selected as the correct class:

$$Q(x) = \arg\max_{i=1}^{K} y_i(x), \quad (8)$$

where $K$ is the number of classifiers and $y_i(x)$ represents the output of the $i$th classifier for the input vector $x$.

The averaging voting technique averages the individual classifier output confidences for each class across the whole ensemble. The class yielding the highest average value is chosen as the correct class:

$$Q(x) = \arg\max_{j=1}^{N} \left( \frac{1}{K} \sum_{i=1}^{K} y_{ij}(x) \right), \quad (9)$$

where $N$ is the number of classes and $y_{ij}(x)$ represents the output confidence of the $i$th classifier for the $j$th class for the input $x$.

In the Nash voting technique, each voter assigns a number between zero and one to each candidate, and the products of the voters' values are compared across all candidates; the highest product wins:

$$Q(x) = \arg\max_{j=1}^{N} \prod_{i=1}^{K} y_{ij}. \quad (10)$$

Different types of voting techniques, that is, average vote, maximum vote, Nash vote, and majority vote, have been applied to measure the accuracy of the proposed system. Figure 12 shows the comparison results of the different types of voting techniques. Though the recognition rates of the voting techniques are very close to each other, the majority voting technique gives the highest recognition rate for the proposed system.
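The following sketch, under the assumption that each of the $K$ classifiers outputs a confidence value per class, illustrates the four voting rules in a simplified form.

```python
import numpy as np
from collections import Counter

def combine_votes(confidences, method="majority"):
    """Simplified voting rules over a (K, N) array of per-class classifier outputs.
    Tie handling is reduced to whichever class Counter reports first."""
    conf = np.asarray(confidences, dtype=float)
    if method == "majority":                   # class chosen most often
        preds = conf.argmax(axis=1)
        return int(Counter(preds).most_common(1)[0][0])
    if method == "maximum":                    # equation (8)
        return int(conf.max(axis=0).argmax())
    if method == "average":                    # equation (9)
        return int(conf.mean(axis=0).argmax())
    if method == "nash":                       # equation (10)
        return int(conf.prod(axis=0).argmax())
    raise ValueError(f"unknown voting method: {method}")
```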

The results of each unimodal iris system, the left-right iris feature fusion based multimodal system, the likelihood ratio score fusion based multimodal system, and the MCS based majority voting technique are compared in Figure 13. From the result, it can be seen that the MCS based majority voting technique achieves higher performance than any of the other approaches to iris recognition considered.

Figure 12: Results of different types of voting techniques (average, maximum, Nash, and majority vote) for the proposed system.

Figure 13: Performance comparison among each unimodal, multimodal, and MCS based majority voting technique for iris recognition.

8. Performance Analysis of the Proposed System

The performance of the proposed feature and score fusion based Multiple Classifier Selection (MCS) approach has been compared with the existing Hamming distance score fusion approach proposed by Ma et al. [24], the log-likelihood ratio score fusion approach proposed by Schmid et al. [26], and the feature fusion approach proposed by Hollingsworth et al. [27]. Figure 14 shows the results, where the proposed system has achieved the highest recognition rate over all of the above-mentioned existing iris recognition systems.

Figure 14: Performance comparison between the proposed approach and different existing approaches of multimodal iris recognition: Hamming distance score fusion (Ma et al., 2004), log-likelihood ratio score fusion (Schmid et al., 2006), feature fusion (Hollingsworth et al., 2009), and the proposed feature fusion based MCS iris recognition system.

The Hamming distance score fusion approach proposed by Ma et al. has been rebuilt for measuring the performance comparison with the proposed approach. From the ROC curves, it is shown that the existing feature fusion approach of Hollingsworth et al. [27] gives higher recognition results compared with the Hamming distance score fusion approach of Ma et al. [24] and the log-likelihood ratio score fusion approach of Schmid et al. [26]. Finally, the proposed feature and decision fusion based MCS system outperforms all of the existing multimodal approaches, whether feature fusion or score fusion. The reason is that the existing approaches applied either only a feature fusion or only a score fusion technique, whereas in this proposed approach the feature fusion and score fusion techniques are combined with the individual left iris and right iris recognition techniques. These four different classifier outputs (i.e., the unimodal left iris recognition classifier, the unimodal right iris recognition classifier, the left-right iris feature fusion based classifier, and the left-right iris likelihood ratio score fusion based classifier) are combined using Multiple Classifier Selection (MCS) through the majority voting technique. Since four classifiers are used as input to the majority voting technique, there is a chance of a tie. In that case, since the left-right iris feature fusion based multimodal system achieves higher performance than any unimodal or likelihood ratio score fusion based multimodal system, as shown in Figure 13, the output of the left-right iris feature fusion based multimodal system has been taken to break the tie. Two unimodal systems are used here because if one unimodal system fails to recognize, the other unimodal system may still retain the accurate output. Since the feature set contains richer information about the raw biometric data than the final decision, integration at the feature level is expected to provide better recognition results. As a result, left and right iris feature fusion has been applied to improve the performance of the proposed system.


In feature fusion, the features of both the left and right iris modalities are integrated with equal weights, but the decisions of different classifiers can be fused with different weights according to the noise levels of the left and right iris. The likelihood ratio score fusion based iris recognition system has been applied in this proposal to combine the classifier outputs nonequally. When these four different classifier outputs are combined with the MCS based majority voting technique, the proposed multimodal system inherits all of the above-mentioned advantages, which gives it a higher recognition rate than the other existing multimodal iris recognition approaches proposed by Ma et al., Schmid et al., and Hollingsworth et al.

The time required to fuse the outputs of the classifiers is directly proportional to the number of modalities used in the proposed system. Since four different classifiers are used in this system, the learning and testing times increase, and the execution time of the proposed system grows considerably with the number of classifiers used for the majority voting method. Reducing the processing time of the proposed system is left as further work, so that the proposed system can operate in real-time environments and support large-population applications.

9. Conclusions and Observations

Experimental results show the superiority of the proposed multimodal feature and score fusion based MCS system over the existing multimodal iris recognition systems proposed by Ma et al., Schmid et al., and Hollingsworth et al. Though the CASIA-IrisV4 dataset has been used for measuring the performance of the proposed system, the database has some limitations. The CASIA iris database does not contain specular reflections due to the use of near-infrared light for illumination. However, some other iris databases such as LEI, UBIRIS, and ICE contain iris images with specular reflections and other noise factors caused by imaging under different natural lighting environments. The proposed system can be tested on the above-mentioned databases to measure its performance under natural lighting conditions at various noise levels. Since the proposed system currently works in an offline environment, the execution time should be reduced so that it can work for real-time applications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] A. K. Jain, R. M. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, Boston, Mass, USA, 1999.
[2] W.-S. Chen and S.-Y. Yuan, An Automatic Iris Recognition System Based on Fractal Dimension, VIPCC Laboratory, Department of Electrical Engineering, National Chi Nan University, 2006.
[3] S. Devireddy, S. Kumar, G. Ramaswamy et al., "A novel approach for an accurate human identification through iris recognition using bitplane slicing and normalisation," Journal of Theoretical and Applied Information Technology, vol. 5, no. 5, pp. 531–537, 2009.
[4] J. Daugman, "Biometric personal identification system based on iris analysis," United States Patent no. 5291560, 1994.
[5] O. Al-Allaf, A. AL-Allaf, A. A. Tamimi, and S. A. AbdAlKader, "Artificial neural networks for iris recognition system: comparisons between different models, architectures and algorithms," International Journal of Information and Communication Technology Research, vol. 2, no. 10, 2012.
[6] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and its Applications, vol. 2, no. 2, pp. 41–50, 2008.
[7] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 955–966, 1995.
[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.
[9] P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, "Multimodal fusion for multimedia analysis: a survey," Multimedia Systems, vol. 16, no. 6, pp. 345–379, 2010.
[10] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.
[11] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, "A survey of iris biometrics research: 2008–2010," in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15–54, Springer, 2012.
[12] Z. Wang, Q. Han, X. Niu, and C. Busch, "Feature-level fusion of iris and face for personal identification," in Advances in Neural Networks—ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356–364, 2009.
[13] Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, "Complex common vector for multimodal biometric recognition," Electronics Letters, vol. 45, no. 10, pp. 495–497, 2009.
[14] Z. Wang, Q. Li, X. Niu, and C. Busch, "Multimodal biometric recognition based on complex KFDA," in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177–180, Xi'an, China, September 2009.
[15] A. Rattani and M. Tistarelli, "Robust multi-modal and multi-unit feature level fusion of face and iris biometrics," Lecture Notes in Computer Science, vol. 5558, pp. 960–969, 2009.
[16] A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, "Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris," European Journal of Scientific Research, vol. 49, no. 4, pp. 484–502, 2011.
[17] U. Gawande, M. Zaveri, and A. Kapur, "A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.
[18] F. Wang and J. Han, "Multimodal biometric authentication based on score level fusion using support vector machine," Opto-Electronics Review, vol. 17, no. 1, pp. 59–64, 2009.
[19] N. Morizet and J. Gilles, "A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments," in Advances in Visual Computing, vol. 5359, no. 2 of Lecture Notes in Computer Science, pp. 661–671, 2008.
[20] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance," in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9–12, August 2009.
[21] J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, "Multi-modal biometric authentication fusing iris and palmprint based on GMM," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349–352, Cardiff, Wales, August-September 2009.
[22] M. Vatsa, R. Singh, A. Noore, and S. K. Singh, "Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification," in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433–436, February 2009.
[23] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Image averaging for improved iris recognition," in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112–1121, 2009.
[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.
[25] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23–30, Hilton Rye Town, NY, USA, July 2005.
[26] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168, 2006.
[27] K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, "Iris recognition using signal-level fusion of frames from video," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837–848, 2009.
[28] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414–417, 2002.
[29] R. P. Wildes, J. C. Asmuth, G. L. Green et al., "A system for automated iris recognition," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121–128, Sarasota, Fla, USA, December 1994.
[30] L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.
[31] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology," in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 81–87, London, UK, 2000.
[32] M. Hwang and X. Huang, "Shared-distribution hidden Markov models for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414–420, 1993.
[33] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[34] W. Khreich, E. Granger, A. Miri, and R. Sabourin, "On the memory complexity of the forward-backward algorithm," Pattern Recognition Letters, vol. 31, no. 2, pp. 91–99, 2010.
[35] "CASIA Iris Image Database Version 4.0—Biometrics Ideal Test (BIT)," http://www.idealtest.org/.
[36] D. Maltoni, "Biometric fusion," in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.
[37] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[38] A. Rogozan and P. S. Sathidevi, "Static and dynamic features for improved HMM based visual speech recognition," in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184–194, Allahabad, India, 2009.
[39] J. S. Lee and C. H. Park, "Adaptive decision fusion for audio-visual speech recognition," in Speech Recognition: Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.
[40] W. Feng, L. Xie, J. Zeng, and Z. Liu, "Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models," Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188–195, 2009.
[41] N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.
[42] R. Battiti and A. M. Colla, "Democracy in neural nets: voting schemes for classification," Neural Networks, vol. 7, no. 4, pp. 691–707, 1994.
[43] C. Ji and S. Ma, "Combinations of weak classifiers," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32–42, 1997.
[44] L. Lam and C. Y. Suen, "Optimal combinations of pattern classifiers," Pattern Recognition Letters, vol. 16, no. 9, pp. 945–954, 1995.
[45] L. Xu, A. Krzyzak, and C. Y. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition," IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 3, pp. 418–435, 1992.
[46] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.
[47] P. Munro and B. Parmanto, "Competition among networks improves committee performance," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592–598, The MIT Press, Cambridge, Mass, USA, 1997.
[48] D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, "Combining multiple classifiers by averaging or by multiplying," Pattern Recognition, vol. 33, no. 9, pp. 1475–1485, 2000.
[49] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66–75, 1994.
[50] S. Cho and J. H. Kim, "Combining multiple neural networks by fuzzy integral for robust classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 2, pp. 380–384, 1995.
[51] L. Kuncheva, "An application of OWA operators to the aggregation of multiple classification decisions," in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330–343, Kluwer Academic, Dordrecht, The Netherlands, 1997.
[52] L. Kuncheva, J. C. Bezdek, and M. A. Sutton, "On combining multiple classifiers by fuzzy templates," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193–197, IEEE, Pensacola Beach, Fla, USA, August 1998.
[53] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.
[54] M. Ceccarelli and A. Petrosino, "Multi-feature adaptive classifiers for SAR image segmentation," Neurocomputing, vol. 14, no. 4, pp. 345–363, 1997.


Page 2: Research Article Feature and Score Fusion Based Multiple Classifier Selection for Iris ...downloads.hindawi.com/journals/cin/2014/380585.pdf · 2019-07-31 · ers output, that is,

2 Computational Intelligence and Neuroscience

achieve the multimodal biometric system in decision fusionapproach which includes Multiple Classifier Selection (MCS)and Dynamic Classifier Selection (DCS)

In this paper a new hybrid fusion based pair of iris recog-nition approaches is proposed where three different fusionlevels such as feature fusion score fusion and decision fusionare used Discrete HiddenMarkovModel (DHMM) has beenused to classify the input iris pattern For decision fusionfour different classifiers output that is left iris based uni-modal DHMM classifier right iris based unimodal DHMMclassifier left-right iris feature fusion based multimodalDHMMclassifier and left-right iris log-likelihood ratio scorefusion based multimodal DHMM classifier is combined toachieve the final result The next sections of the paper dealwith related work and proposed fusion scheme automatediris segmentation and feature extraction techniques andexperimental results of each of the four different classifiersand their combined output with the comparison of existingmultimodal iris recognition system

2 Literature Review and Architecture ofthe Proposed System

A very little amount of work has been done in iris recognitionwhere multiple modalities of iris have been used Mostof the multimodal iris biometric work combines iris withother biometric techniques The largest cluster of papers inthis area deals with the combination of face and iris andanothermultibiometric system involves almost any combina-tion of iris and some other multimodalities like fingerprintpalmprint speech and so forth [11] Two different fusionstrategies that is feature and score fusion have been used inthose works Researchers used these fusion techniques withdifferent strategies in the areas of multimodal techniquesIn feature fusion person identification has performed bycombining iris and facial feature vector [12] Wang et al alsoused face and iris common feature vector for multimodalbiometric recognition [13 14] Rattani andTistarelli proposeda multiunit multimodal biometric system where face left irisand right iris image vector were used and fused these threedifferent features to enhance the performance [15] Iris andfingerprint multimodal feature fusion based cryptographickey generation system [16] and person identification system[17] were developed Various multimodal score level fusionschemes were also proposed by different researchers In[18] different algorithms were used to calculate the score ofiris and facial multimodalities and Support Vector Machine(SVM) was used to combine those scores User specific scorefusion approach of iris and face [19] iris and fingerprintbased multimodal score fusion [20] score fusion of iris andpalmprint multibiometric system [21] were also developed

Some fusion approaches have been proposed for iris recognition itself in which feature and score fusion methods were used. Using an iris feature fusion method, Vatsa et al. combined left and right iris images for a multimodal biometric system [22]. Hollingsworth et al. [23] created a single average image from multiple frames of an iris video and showed that single-level fusion is better than a multigallery score fusion method. Among score fusion methods, Ma et al. [24] used three different templates from an iris image and averaged their scores to obtain the final result. Krichen et al. [25] used the minimum match score instead of iris image averaging to obtain the final score for iris biometric recognition. Schmid et al. developed a log-likelihood ratio method to fuse the scores, which performs better than Hamming distance based score fusion [26].

Hollingsworth et al. [27] proposed an image averaging approach using single-level features from iris images to improve matching performance. They compared their system with three reimplemented score fusion approaches of Ma et al. [24], Krichen et al. [25], and Schmid et al. [26]. They reported that the proposed feature fusion approach performed better than Ma's or Krichen's score level fusion methods over N Hamming distance scores and also performed better than Schmid's log-likelihood method of score fusion.

In this work, a combined approach of feature fusion and score fusion for a pair-of-iris multibiometric system has been developed, as shown in Figure 1. Features from the left and right iris images are fused, and a log-likelihood ratio based score fusion method is applied to find the iris recognition score. Finally, to obtain the final recognition result, a voting method is applied to combine the outputs of Multiple Classifier Selection (MCS), namely, the individual modality of each iris (i.e., left and right iris), the feature fusion based modality, and the log-likelihood ratio based modality. A Discrete Hidden Markov Model (DHMM) has been used as the classifier, and Principal Component Analysis (PCA) has been applied to reduce the dimensionality of the iris features at different levels of the overall proposed approach. Various experiments have been performed comparing the proposed feature and decision fusion based MCS approach with the existing hamming distance score fusion approach proposed by Ma et al. [24], the log-likelihood ratio score fusion approach proposed by Schmid et al. [26], and the feature fusion approach proposed by Hollingsworth et al. [27].

3. Iris Segmentation and Feature Extraction

Since iris image preprocessing and feature extraction play a vital role in the overall recognition performance, standard iris localization, segmentation, normalization, and feature encoding procedures have been applied in the proposed system.

The first step is to isolate the actual iris region from the eye image. For this, an edge map is created using the Canny directional edge detector; Figure 2(b) shows the edge map after applying the Canny edge detector. To extract the original iris region, two different boundaries have to be detected. A vertical edge map has been created for detecting the outer boundary, that is, the iris/sclera boundary, and a horizontal edge map has been created for detecting the inner boundary, that is, the pupil/iris boundary. The circular Hough transform has been applied to deduce the center coordinates and radius of the outer and inner boundaries of the iris [28, 29]. The results of using the circular Hough transform are shown in Figure 2(c).

Figure 1: Block diagram of the proposed feature and decision fusion based Multiple Classifier Selection for iris recognition.

Figure 2: (a) Eye image, (b) edge map using Canny edge detector, (c) results of circular Hough transform, and (d) removal of the eyelid parts.

After that, we have to remove those parts of the iris that are overlapped by the eyelids. The eyelids were isolated by first fitting a line to the upper and lower eyelid using the linear Hough transform [30]. A second horizontal line is then drawn which intersects the first line at the iris edge closest to the pupil. This process is done for both the top and bottom eyelids. The second horizontal line allows maximum isolation of the eyelid regions. The result of removing the eyelid regions is shown in Figure 2(d).
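As an illustration of this segmentation step, the following Python sketch uses OpenCV's Canny detector and circular Hough transform to locate the pupil/iris and iris/sclera boundaries. The blur size, thresholds, radius ranges, and file name are illustrative assumptions rather than values taken from the paper.

import cv2

def locate_iris_boundaries(eye_gray):
    """Return (edge_map, pupil_circle, iris_circle); circles are (x, y, r) or None."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 30, 90)     # edge map, as in Figure 2(b)

    # Outer (iris/sclera) boundary: search for larger circles.
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                            param1=90, param2=40, minRadius=80, maxRadius=150)
    # Inner (pupil/iris) boundary: search for smaller circles.
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                             param1=90, param2=30, minRadius=20, maxRadius=70)
    iris = tuple(iris[0, 0]) if iris is not None else None
    pupil = tuple(pupil[0, 0]) if pupil is not None else None
    return edges, pupil, iris

# Example usage (file name is hypothetical):
# eye = cv2.imread("casia_iris_interval_sample.jpg", cv2.IMREAD_GRAYSCALE)
# edge_map, pupil, iris = locate_iris_boundaries(eye)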

In normalization, the iris region is transformed into a fixed dimension so that two different iris images can be compared at the same spatial locations. Dimensional inconsistencies of the iris region occur due to stretching of the iris caused by pupil dilation under varying levels of illumination, image capturing distance, angular deflection of the camera and eye, and so forth. The normalization process removes all of the above-mentioned difficulties to produce a fixed-size iris vector. Daugman's Rubber Sheet Model [31] has been used to normalize the iris region. In this process, each point of the iris region is converted into a pair of polar coordinates (r, θ), where r lies on the interval between 0 and 1 and θ is the angle between 0 and 2π, as shown in Figure 3. A problem can occur due to rotation of the eye within the eye socket in the iris image. For this reason, the center of the pupil has been considered as the reference point, and a number of data points are selected along each radial line, which is called the radial resolution. In this way, we obtain a fixed dimension for the iris region, which is shown in Figure 3(b).
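The following NumPy sketch illustrates a rubber-sheet style unwrapping of the iris annulus onto a fixed (radial x angular) grid between the detected pupil and iris circles; the grid sizes and the simple nearest-neighbour sampling are assumptions made for brevity, not parameters taken from the paper.

import numpy as np

def rubber_sheet_normalize(eye_gray, pupil, iris, radial_res=20, angular_res=240):
    """pupil, iris: (x, y, r) circles; returns a (radial_res, angular_res) array.
    Points are sampled by linear interpolation between the pupil and iris
    boundary points along each radial line (homogeneous rubber-sheet model)."""
    px, py, pr = pupil
    ix, iy, ir = iris
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.0, 1.0, radial_res)          # r on the interval [0, 1]
    out = np.zeros((radial_res, angular_res), dtype=np.float32)
    for j, th in enumerate(thetas):
        x0, y0 = px + pr * np.cos(th), py + pr * np.sin(th)   # pupil boundary point
        x1, y1 = ix + ir * np.cos(th), iy + ir * np.sin(th)   # iris boundary point
        for i, r in enumerate(radii):
            x = (1.0 - r) * x0 + r * x1
            y = (1.0 - r) * y0 + r * y1
            out[i, j] = eye_gray[int(round(y)), int(round(x))]  # nearest-neighbour sample
    return out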

For feature extraction, it is important to capture the most significant information present in the iris region. Several alternative techniques can be used for feature extraction, including Gabor filtering, zero-crossing 1D wavelet filters, Log-Gabor filters, and the Haar wavelet. In this work, the Log-Gabor filtering technique has been applied to extract the iris features effectively; 9600 feature values have been taken from each iris region. The Principal Component Analysis method has then been used to reduce the dimension of the feature vector, where 550 feature values have been retained. The feature extraction and dimensionality reduction process is shown in Figure 4.

Figure 3: (a) Normalization process and (b) iris region with fixed dimension.

Figure 4: Feature encoding and dimensionality reduction process.
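A minimal sketch of this encoding step, assuming a 1D Log-Gabor filter applied to each row of the normalized iris and two phase bits per filtered sample, followed by PCA with scikit-learn. The filter wavelength, bandwidth ratio, and grid size are illustrative assumptions, chosen so that the template length matches the 9600 values stated above.

import numpy as np
from sklearn.decomposition import PCA

def log_gabor_encode(norm_iris, wavelength=18.0, sigma_over_f0=0.5):
    """Phase-encode each row of the normalized iris with a 1D Log-Gabor filter."""
    rows, cols = norm_iris.shape
    f = np.fft.fftfreq(cols)                       # spatial frequencies per row
    f0 = 1.0 / wavelength                          # filter centre frequency
    lg = np.zeros(cols)
    pos = f > 0                                    # one-sided filter gives a complex response
    lg[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) /
                     (2.0 * np.log(sigma_over_f0) ** 2))
    response = np.fft.ifft(np.fft.fft(norm_iris, axis=1) * lg, axis=1)
    # Two phase bits per sample: the signs of the real and imaginary parts.
    bits = np.stack([response.real >= 0, response.imag >= 0], axis=-1)
    return bits.astype(np.float32).reshape(-1)     # e.g. 20 x 240 x 2 = 9600 values

# PCA is fitted on the training templates and 550 components are kept,
# matching the dimensionality stated above (train_codes is hypothetical data):
# pca = PCA(n_components=550).fit(train_codes)     # train_codes: (n_samples, 9600)
# reduced = pca.transform(train_codes)             # (n_samples, 550)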

4. Iris Recognition Using HMM Classifier

In the training phase of the proposed iris recognition system, a DHMM (Discrete HMM) θ_k has been built for each iris k [32]. The model parameters (A, B, Π) have been estimated to optimize the likelihood of the training set observation vectors for the kth iris using the Baum-Welch algorithm. The Baum-Welch reestimation formulas have been considered as follows [33]:

$$
\Pi_i = \gamma_1(i), \qquad
a_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad
b_j(k) = \frac{\sum_{t=1}^{T} \mathbf{1}(o_t = v_k)\,\gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)},
\tag{1}
$$

where

$$
\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}, \qquad
\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j).
\tag{2}
$$
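A minimal NumPy sketch of one Baum-Welch reestimation pass implementing (1) and (2) for a discrete-observation HMM; it assumes a single observation sequence of integer symbols and omits the scaling and multiple-sequence handling that a practical trainer would need.

import numpy as np

def baum_welch_step(A, B, pi, obs):
    """One reestimation pass for a discrete HMM.
    A: (N, N) transitions, B: (N, M) emissions, pi: (N,) initial distribution,
    obs: length-T sequence of symbol indices."""
    obs = np.asarray(obs)
    N, T = A.shape[0], len(obs)

    # Forward and backward passes (unscaled, for clarity only).
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # xi_t(i, j) and gamma_t(i) as in (2).
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        num = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
        xi[t] = num / num.sum()
    gamma = xi.sum(axis=2)                                   # t = 1 .. T-1
    gamma_last = alpha[-1] / alpha[-1].sum()                 # last-step state posteriors
    gamma_all = np.vstack([gamma, gamma_last[None, :]])

    # Reestimation formulas (1).
    new_pi = gamma_all[0]
    new_A = xi.sum(axis=0) / gamma.sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma_all[obs == k].sum(axis=0) / gamma_all.sum(axis=0)
    return new_A, new_B, new_pi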

In the testing phase, for each unknown iris to be recognized, the processing shown in Figure 5 has been carried out.

Figure 5: Block diagram of the DHMM based iris recognition system (modified) [10].

This procedure includes:

(i) measurement of the observation sequence O = o_1, o_2, ..., o_n via a feature analysis of the iris pattern;
(ii) transformation of the continuous values of O into integer values;
(iii) calculation of the model likelihoods for all possible models, P(O | θ_k), 1 ≤ k ≤ K;
(iv) declaration of the iris as belonging to person k*, whose model likelihood is the highest, that is,

$$
k^* = \arg\max_{1 \le k \le K} \left[ P(O \mid \theta_k) \right].
\tag{3}
$$

In this proposed work, the probability computation step has been performed using Baum's Forward-Backward algorithm [33, 34].
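A minimal sketch of this testing procedure: the scaled forward algorithm computes log P(O | θ_k) for each enrolled DHMM, and the iris is assigned to the model with the highest log-likelihood, as in (3). Storing each model as an (A, B, Π) triple is an assumption made for illustration.

import numpy as np

def forward_log_likelihood(A, B, pi, obs):
    """Scaled forward algorithm; returns log P(obs | model)."""
    obs = np.asarray(obs)
    alpha = pi * B[:, obs[0]]
    log_prob = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        log_prob += np.log(scale)
        alpha = alpha / scale
    return log_prob

def recognize(models, obs):
    """models: list of (A, B, pi) triples, one DHMM per enrolled iris.
    Returns (k_star, scores) with k_star maximizing P(O | theta_k), as in (3)."""
    scores = [forward_log_likelihood(A, B, pi, obs) for (A, B, pi) in models]
    return int(np.argmax(scores)), scores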

Medium-sized iris datasets have been used for most recent iris recognition tasks, which makes it difficult to evaluate performance accurately. At the same time, more and more large-scale iris recognition systems are being deployed in real-world applications, and many new problems arise in the classification and indexing of large-scale iris image databases, so scalability is another challenging issue in iris recognition. The CASIA-IrisV4 iris database has been released to promote research on long-range and large-scale iris recognition systems. Consequently, the CASIA-IrisV4 iris database [35] has been used to measure the performance of the overall iris recognition system.

The CASIA-IrisV4 database has been developed for and released to the international biometrics community and has been updated from CASIA-IrisV1 to CASIA-IrisV3 since 2002. More than 3000 users from 70 countries or regions have downloaded CASIA-Iris, and much excellent work on iris recognition has been done based on these iris image databases. CASIA-IrisV4 contains 54,601 iris images from 1800 genuine and 1000 virtual subjects. The iris images are collected under near-infrared illumination or synthesized and are represented as 8-bit gray-level images. There are six data sets which were collected at different times using four different methodologies, that is, CASIA-Iris-Interval, CASIA-Iris-Lamp, CASIA-Iris-Distance, and CASIA-Iris-Thousand. Since CASIA-Iris-Interval is well suited to the detailed texture features of iris images, which are captured by a close-up iris camera, this dataset has been used for the experimental work of the proposed system. The most compelling feature of this database is its circular NIR LED array, designed with suitable luminous flux for iris imaging.

For measuring the accuracy of the individual left iris and right iris based unimodal recognition systems, the critical parameter, that is, the number of hidden states of the DHMM, can affect the performance of the system. A tradeoff is made to explore the optimum number of hidden states, and the comparison results are shown as Receiver Operating Characteristics (ROC) curves in Figure 6, which represents the left and right iris based unimodal recognition performance jointly.

5. Feature Fusion Based HMM Classifier

Feature level fusion can significantly improve the performance of a multibiometric iris system besides improving population coverage, deterring spoof attacks, increasing the degrees of freedom, and reducing the failure-to-enroll rate. Although the storage requirements, processing time, and computational demands of a feature fusion based system are much higher than those of a unimodal iris recognition system [36], effective integration of left and right iris features can remove or reduce most of the above-mentioned problems. Feature level fusion of left and right iris features is therefore an important fusion strategy which can improve the overall performance of the proposed system. Feature level fusion retains more information than score level fusion and decision level fusion, so it can be expected to achieve greater performance than the other fusion strategies in the proposed multimodal iris recognition system.

Feature level fusion can be obtained by simple concatenation of the feature sets taken from the left and right information sources. By concatenating two feature vectors U' = {u'_1, u'_2, ..., u'_m} and V' = {v'_1, v'_2, ..., v'_n}, a new feature vector UV' = {u'_1, u'_2, ..., u'_m, v'_1, v'_2, ..., v'_n}, UV' ∈ R^(m+n), has been created. The objective is to combine these two feature sets in order to create a new feature vector UV' that better represents the individual. When the left and right iris feature vectors are combined, the dimensionality of the new feature vector becomes very large. As a result, a dimensionality reduction technique is necessary to reduce the search domain of the learned database. The feature selection process chooses a minimal feature set of size k, where k < (m + n), that improves the performance on a training set of feature vectors. The objective of this phase is to find an optimal subset of features from the complete feature set. Principal Component Analysis [37] has been used to reduce the dimensionality of the feature vector.
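A minimal sketch of this feature-level fusion, assuming left and right templates of 9600 values each that are concatenated into a 19200-value vector and reduced to 1100 values with scikit-learn's PCA, matching the dimensions shown in Figure 7.

import numpy as np
from sklearn.decomposition import PCA

def fuse_left_right_features(left_feats, right_feats, n_components=1100):
    """left_feats, right_feats: (n_samples, 9600) arrays of encoded irises.
    Returns the fitted PCA model and the (n_samples, 1100) fused features."""
    fused = np.hstack([left_feats, right_feats])       # (n_samples, 19200)
    # Note: PCA needs n_samples >= n_components, so it is fitted on the training data.
    pca = PCA(n_components=n_components).fit(fused)
    return pca, pca.transform(fused)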

Figure 6: ROC curves of the left iris and right iris based unimodal systems for 5, 10, 15, 20, and 25 hidden states.

Figure 7: Feature fusion based multimodal iris recognition system.

Figure 7 shows the process of the left and right iris feature fusion based multimodal technique of the proposed system.

For feature level fusion, the optimum number of hidden states of the DHMM has been chosen, and Figure 8 shows the comparison results as ROC curves. Figure 9 shows the performance comparison among the unimodal left iris, unimodal right iris, and left-right iris feature fusion based recognition systems. The primary goal of the left-right iris feature fusion based multimodal system is to achieve performance equal to or better than that of either the left or right unimodal iris recognition system. When the noise level of the right iris is high, the left iris unimodal system performs better than the right iris unimodal system, and the left-right iris recognition performance should then be at least as good as that of the left iris unimodal system; when the noise level of the left iris is high, the right iris recognition performance is better than that of the left, and the integrated performance should be at least as good as that of the right iris recognition. The system also works very well when the left and right iris images do not contain noise. Figure 9 shows this performance of the left-right feature fusion based multimodal iris recognition.

Figure 8: Recognition performance of the left-right iris feature fusion based multimodal system for different numbers of hidden states of the DHMM (5, 10, 15, 20, and 25).

Figure 9: Performance comparison among the unimodal left iris, unimodal right iris, and left-right iris feature fusion based multimodal recognition systems.

6. Likelihood Ratio Score Fusion Based HMM Classifier

In the left-right iris feature fusion based method, the left iris and right iris features are combined into one high-dimensional vector. However, this method considers both modalities with equal weight, which is the main disadvantage of the feature fusion based multimodal method. This problem can be solved by using different weights according to the different noise conditions of the left and right iris modalities. Score fusion, which assigns a weight to each modality, can be used for this purpose. It also allows dynamic adjustment of the importance of each stream through its weight, according to the stream's estimated reliability.

Left-right iris likelihood ratio based score fusion is a score fusion technique in which the reliability of each modality is measured using the output of the DHMM classifier for both the left and right iris features. If one of the modalities becomes corrupted by noise, the other modality fills the gap through its increased weight, and the recognition rate benefits from this score fusion based method. This also holds in the case of a complete interruption of one stream: the reliability of the uncorrupted modality should then be close to the maximum and the weight assigned to the missing stream close to zero, which practically makes the system revert to single-stream recognition automatically. This process is instantaneous and also reversible; that is, if the missing stream is restored, the weights return to their levels before the interruption. The integrated weight, which determines the amount of contribution of each modality in the left-right iris score fusion based recognition system, is calculated from the relative reliability of the two modalities. Figure 10 shows the process of the likelihood ratio score fusion based pair-of-iris recognition system.

To calculate the score fusion result, the DHMM outputs of the individual iris (i.e., left and right) recognizers are combined by a weighted sum rule to produce the final score fusion result. For a given left-right iris test datum (O_L, O_R), the recognized class C* is given by [38]

$$
C^* = \arg\max_i \left[ \gamma \log P\left(O_L \mid \lambda_L^i\right) + (1 - \gamma) \log P\left(O_R \mid \lambda_R^i\right) \right],
\tag{4}
$$

where λ_L^i and λ_R^i are the left iris and right iris DHMMs for the ith class, respectively, and log P(O_L | λ_L^i) and log P(O_R | λ_R^i) are their log-likelihoods for the ith class.

The weighting factor γ determines the contribution of each modality to the final decision. Of the two most popular integration approaches, baseline reliability ratio based integration and N-best recognition hypothesis reliability ratio based integration, baseline reliability ratio based integration has been used here, where the integration weight is calculated from the reliability of each individual modality. The reliability of each modality can be calculated as [39]

$$
S_m = \frac{1}{N - 1} \sum_{i=1}^{N} \left( \max_j \log P\left(O \mid \lambda_j\right) - \log P\left(O \mid \lambda_i\right) \right),
\tag{5}
$$

which means that the average difference between the maximum log-likelihood and the other log-likelihoods is used to determine the reliability of each modality, where N is the number of classes being considered and m ∈ {L, R}. The integration weight of the left iris modality, γ_L, can then be calculated by [40]

$$
\gamma_L = \frac{S_L}{S_L + S_R},
\tag{6}
$$

Figure 10: Process of the likelihood ratio score fusion based pair-of-iris recognition system.

In (6), S_L and S_R are the reliability measures of the outputs of the left iris and right iris DHMMs, respectively.

The integration weight of the right iris modality can be found as

$$
\gamma_R = 1 - \gamma_L.
\tag{7}
$$
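A minimal NumPy sketch of the likelihood ratio score fusion described by (4)-(7): the per-class log-likelihoods from the left and right DHMMs are combined with a weight derived from the reliability measure of each modality.

import numpy as np

def reliability(loglik):
    """S_m from (5): average gap between the best log-likelihood and the others."""
    N = len(loglik)
    return np.sum(np.max(loglik) - loglik) / (N - 1)

def likelihood_ratio_score_fusion(loglik_left, loglik_right):
    """loglik_left[i] = log P(O_L | lambda_L^i), loglik_right[i] = log P(O_R | lambda_R^i).
    Returns (best_class, fused_scores, gamma_L)."""
    loglik_left = np.asarray(loglik_left)
    loglik_right = np.asarray(loglik_right)
    s_l, s_r = reliability(loglik_left), reliability(loglik_right)
    gamma_l = s_l / (s_l + s_r)                      # (6); gamma_R = 1 - gamma_L, as in (7)
    fused = gamma_l * loglik_left + (1.0 - gamma_l) * loglik_right   # weighted sum rule (4)
    return int(np.argmax(fused)), fused, gamma_l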

The results after applying the score fusion approach to the iris recognition system are shown in Figure 11. The results compare the unimodal left iris, unimodal right iris, and left-right iris score fusion based multimodal recognition systems. Here the score fusion approach achieves a higher recognition rate than either individual unimodal system of the left or right iris.

7. Multiple Classifier Fusion for the Proposed System

An effective way to combine multiple classifiers is required when a set of classifier outputs is available. Various architectures and schemes have been proposed for combining multiple classifiers [41]. The majority vote [42-45] is the most popular approach; other voting schemes include the maximum, minimum, median, Nash [46], average [47], and product [48] schemes. Other approaches to combining classifiers include rank-based methods such as the Borda count [49], the Bayes approach [44, 45], Dempster-Shafer theory [45], the fuzzy integral [50], fuzzy connectives [51], fuzzy templates [52], probabilistic schemes [53], and combination by neural networks [54]. Majority, average, maximum, and Nash voting techniques [41, 47] have been used in this work to find the most efficient voting technique for combining the four classifier outputs.

In the majority voting technique, the correct class is the one most often chosen by the different classifiers. If all the classifiers indicate different classes, or in the case of a tie, the class with the highest overall output is selected as the correct class.

Figure 11: Results of comparison among the unimodal left iris, unimodal right iris, and left-right iris likelihood ratio score fusion based multimodal recognition systems.

In the maximum voting technique, the class with the highest overall output is selected as the correct class:

$$
Q(x) = \arg\max_{i=1,\ldots,K} y_i(x),
\tag{8}
$$

where K is the number of classifiers and y_i(x) represents the output of the ith classifier for the input vector x.

The averaging voting technique averages the individual classifiers' output confidences for each class across the whole ensemble, and the class yielding the highest average value is chosen as the correct class:

$$
Q(x) = \arg\max_{j=1,\ldots,N} \left( \frac{1}{K} \sum_{i=1}^{K} y_{ij}(x) \right),
\tag{9}
$$

where N is the number of classes and y_{ij}(x) represents the output confidence of the ith classifier for the jth class for the input x.

In the Nash voting technique, each voter assigns a number between zero and one to each candidate, and the products of the voters' values are compared across candidates; the candidate with the highest product is the winner:

$$
Q(x) = \arg\max_{j=1,\ldots,N} \prod_{i=1}^{K} y_{ij}(x).
\tag{10}
$$
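A minimal NumPy sketch of the four voting rules, operating on a K x N matrix of per-class classifier confidences; following the tie-handling rule stated above, the majority vote falls back to the highest overall output when no single class wins.

import numpy as np

def majority_vote(conf):
    """conf: (K, N) per-class confidences from K classifiers."""
    votes = np.argmax(conf, axis=1)                     # each classifier's chosen class
    counts = np.bincount(votes, minlength=conf.shape[1])
    winners = np.flatnonzero(counts == counts.max())
    if len(winners) == 1:
        return int(winners[0])
    return int(np.argmax(conf.max(axis=0)))             # tie: highest overall output

def maximum_vote(conf):
    return int(np.unravel_index(np.argmax(conf), conf.shape)[1])    # (8)

def average_vote(conf):
    return int(np.argmax(conf.mean(axis=0)))             # (9)

def nash_vote(conf):
    return int(np.argmax(np.prod(conf, axis=0)))          # (10); assumes confidences in [0, 1]

# Example: conf[i, j] is the normalized output of classifier i for class j,
# gathered here from the four classifiers of the proposed system (hypothetical data).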

Different voting techniques, that is, average vote, maximum vote, Nash vote, and majority vote, have been applied to measure the accuracy of the proposed system. Figure 12 shows the comparison results for the different voting techniques. Although the recognition rates of the voting techniques are very close to each other, the majority voting technique gives the highest recognition rate for the proposed system.

The results of each unimodal iris system, the left-right iris feature fusion based multimodal system, the likelihood ratio based score fusion multimodal system, and the MCS based majority voting technique are compared in Figure 13.

Figure 12: Results of different types of voting techniques for the proposed system.

Figure 13: Performance comparison among each unimodal, multimodal, and MCS based majority voting technique for iris recognition.

From the results, it can be seen that the MCS based majority voting technique achieves higher performance than any of the other approaches of the iris recognition system.

8. Performance Analysis of the Proposed System

The performance of the proposed feature and score fusion based Multiple Classifier Selection (MCS) approach has been compared with the existing hamming distance score fusion approach proposed by Ma et al. [24], the log-likelihood ratio score fusion approach proposed by Schmid et al. [26], and the feature fusion approach proposed by Hollingsworth et al. [27]. Figure 14 shows the results, where the proposed system achieves the highest recognition rate over all of the above-mentioned existing iris recognition systems.

Figure 14: Performance comparison between the proposed approach and different existing approaches of multimodal iris recognition: hamming distance score fusion (Ma et al., 2004), log-likelihood ratio score fusion (Schmid et al., 2006), feature fusion (Hollingsworth et al., 2009), and the proposed feature fusion based MCS iris recognition system.

The hamming distance score fusion approach proposed by Ma et al. has been reimplemented for measuring the performance comparison with the proposed approach. From the ROC curves, it can be seen that the existing feature fusion approach of Hollingsworth et al. [27] gives higher recognition results compared with the hamming distance score fusion approach of Ma et al. [24] and the log-likelihood ratio score fusion approach of Schmid et al. [26]. Finally, the proposed feature and decision fusion based MCS system outperforms all of the existing multimodal approaches, whether feature fusion or score fusion based. The reason is that the existing approaches applied only either a feature fusion or a score fusion technique, whereas in the proposed approach, feature fusion and score fusion techniques are combined with the individual left iris and right iris recognition techniques. The outputs of these four different classifiers (i.e., the unimodal left iris recognition classifier, the unimodal right iris recognition classifier, the left-right iris feature fusion based classifier, and the left-right iris likelihood ratio score fusion based classifier) are combined using Multiple Classifier Selection (MCS) through the majority voting technique. Since four classifiers are used as input to the majority voting technique, there is a chance of a tie. In that case, since the left-right iris feature fusion based multimodal system achieves higher performance than any unimodal system and than the likelihood ratio score fusion based multimodal system, as shown in Figure 12, the output of the left-right iris feature fusion based multimodal system has been taken to break the tie. Two unimodal systems are used here because, if one unimodal system fails to recognize, the other unimodal system retains the accurate output as its associate. Since the feature set contains richer information about the raw biometric data than the final decision, integration at the feature level is expected to provide better recognition results. As a result, left and right iris feature fusion has been applied to improve the performance of the proposed system.


In feature fusion, the features of the left and right iris modalities are integrated with equal weights, but the decisions of different classifiers can be fused with different weights according to the noise levels of the left and right iris. The likelihood ratio score fusion based iris recognition system has been applied in this proposal to combine the classifier outputs nonequally. When these four different classifier outputs are combined with the MCS based majority voting technique, the proposed multimodal system gains all of the above-mentioned advantages, which gives it a higher recognition rate than the existing multimodal iris recognition approaches proposed by Ma et al., Schmid et al., and Hollingsworth et al.

The time required to fuse the outputs of the classifiers is directly proportional to the number of modalities used in the proposed system. Since four different classifiers are used in this system, the learning and testing times increase, and the execution time of the proposed system grows considerably with the number of classifiers used in the majority voting method. Reducing the processing time of the proposed system, so that it can work in real-time environments and in applications supporting large populations, is left as further work.

9. Conclusions and Observations

Experimental results show the superiority of the proposed multimodal feature and score fusion based MCS system over the existing multimodal iris recognition systems proposed by Ma et al., Schmid et al., and Hollingsworth et al. Although the CASIA-IrisV4 dataset has been used for measuring the performance of the proposed system, this database has some limitations: CASIA iris images do not contain specular reflections, owing to the use of near-infrared light for illumination. Some other iris databases, such as LEI, UBIRIS, and ICE, contain iris images with specular reflections and additional noise factors caused by imaging under different natural lighting environments. The proposed system can be tested on the above-mentioned databases to measure its performance under natural lighting conditions at various noise levels. Since the proposed system currently works in an offline environment, its execution time could be reduced so that it can work in real-time applications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] A. K. Jain, R. M. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, Boston, Mass, USA, 1999.
[2] W.-S. Chen and S.-Y. Yuan, An Automatic Iris Recognition System Based on Fractal Dimension, VIPCC Laboratory, Department of Electrical Engineering, National Chi Nan University, 2006.
[3] S. Devireddy, S. Kumar, G. Ramaswamy, et al., "A novel approach for an accurate human identification through iris recognition using bitplane slicing and normalisation," Journal of Theoretical and Applied Information Technology, vol. 5, no. 5, pp. 531-537, 2009.
[4] J. Daugman, "Biometric personal identification system based on iris analysis," United States Patent no. 5291560, 1994.
[5] O. Al-Allaf, A. AL-Allaf, A. A. Tamimi, and S. A. AbdAlKader, "Artificial neural networks for iris recognition system: comparisons between different models, architectures and algorithms," International Journal of Information and Communication Technology Research, vol. 2, no. 10, 2012.
[6] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and its Applications, vol. 2, no. 2, pp. 41-50, 2008.
[7] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 955-966, 1995.
[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, 2005.
[9] P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, "Multimodal fusion for multimedia analysis: a survey," Multimedia Systems, vol. 16, no. 6, pp. 345-379, 2010.
[10] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.
[11] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, "A survey of iris biometrics research: 2008-2010," in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15-54, Springer, 2012.
[12] Z. Wang, Q. Han, X. Niu, and C. Busch, "Feature-level fusion of iris and face for personal identification," in Advances in Neural Networks - ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356-364, 2009.
[13] Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, "Complex common vector for multimodal biometric recognition," Electronics Letters, vol. 45, no. 10, pp. 495-497, 2009.
[14] Z. Wang, Q. Li, X. Niu, and C. Busch, "Multimodal biometric recognition based on complex KFDA," in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177-180, Xi'an, China, September 2009.
[15] A. Rattani and M. Tistarelli, "Robust multi-modal and multi-unit feature level fusion of face and iris biometrics," Lecture Notes in Computer Science, vol. 5558, pp. 960-969, 2009.
[16] A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, "Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris," European Journal of Scientific Research, vol. 49, no. 4, pp. 484-502, 2011.
[17] U. Gawande, M. Zaveri, and A. Kapur, "A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.
[18] F. Wang and J. Han, "Multimodal biometric authentication based on score level fusion using support vector machine," Opto-Electronics Review, vol. 17, no. 1, pp. 59-64, 2009.
[19] N. Morizet and J. Gilles, "A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments," Advances in Visual Computing, Lecture Notes in Computer Science, vol. 5359, no. 2, pp. 661-671, 2008.
[20] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance," in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9-12, August 2009.
[21] J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, "Multi-modal biometric authentication fusing iris and palmprint based on GMM," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349-352, Cardiff, Wales, August-September 2009.
[22] M. Vatsa, R. Singh, A. Noore, and S. K. Singh, "Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification," in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433-436, February 2009.
[23] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Image averaging for improved iris recognition," in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112-1121, 2009.
[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, 2004.
[25] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23-30, Hilton Rye Town, NY, USA, July 2005.
[26] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154-168, 2006.
[27] K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, "Iris recognition using signal-level fusion of frames from video," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837-848, 2009.
[28] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414-417, 2002.
[29] R. P. Wildes, J. C. Asmuth, G. L. Green, et al., "A system for automated iris recognition," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121-128, Sarasota, Fla, USA, December 1994.
[30] L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.
[31] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology," in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 81-87, London, UK, 2000.
[32] M. Hwang and X. Huang, "Shared-distribution hidden Markov models for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414-420, 1993.
[33] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[34] W. Khreich, E. Granger, A. Miri, and R. Sabourin, "On the memory complexity of the forward-backward algorithm," Pattern Recognition Letters, vol. 31, no. 2, pp. 91-99, 2010.
[35] "CASIA Iris Image Database Version 4.0 - Biometrics Ideal Test (BIT)," http://www.idealtest.org/.
[36] D. Maltoni, "Biometric fusion," in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.
[37] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[38] A. Rogozan and P. S. Sathidevi, "Static and dynamic features for improved HMM based visual speech recognition," in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184-194, Allahabad, India, 2009.
[39] J. S. Lee and C. H. Park, "Adaptive decision fusion for audio-visual speech recognition," in Speech Recognition, Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.
[40] W. Feng, L. Xie, J. Zeng, and Z. Liu, "Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models," Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188-195, 2009.
[41] N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.
[42] R. Battiti and A. M. Colla, "Democracy in neural nets: voting schemes for classification," Neural Networks, vol. 7, no. 4, pp. 691-707, 1994.
[43] C. Ji and S. Ma, "Combinations of weak classifiers," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32-42, 1997.
[44] L. Lam and C. Y. Suen, "Optimal combinations of pattern classifiers," Pattern Recognition Letters, vol. 16, no. 9, pp. 945-954, 1995.
[45] L. Xu, A. Krzyzak, and C. Y. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition," IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 3, pp. 418-435, 1992.
[46] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281-286, 2002.
[47] P. Munro and B. Parmanto, "Competition among networks improves committee performance," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592-598, The MIT Press, Cambridge, Mass, USA, 1997.
[48] D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, "Combining multiple classifiers by averaging or by multiplying," Pattern Recognition, vol. 33, no. 9, pp. 1475-1485, 2000.
[49] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66-75, 1994.
[50] S. Cho and J. H. Kim, "Combining multiple neural networks by fuzzy integral for robust classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 2, pp. 380-384, 1995.
[51] L. Kuncheva, "An application of OWA operators to the aggregation of multiple classification decisions," in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330-343, Kluwer Academic, Dordrecht, The Netherlands, 1997.
[52] L. Kuncheva, J. C. Bezdek, and M. A. Sutton, "On combining multiple classifiers by fuzzy templates," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193-197, IEEE, Pensacola Beach, Fla, USA, August 1998.
[53] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, 1998.
[54] M. Ceccarelli and A. Petrosino, "Multi-feature adaptive classifiers for SAR image segmentation," Neurocomputing, vol. 14, no. 4, pp. 345-363, 1997.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 3: Research Article Feature and Score Fusion Based Multiple Classifier Selection for Iris ...downloads.hindawi.com/journals/cin/2014/380585.pdf · 2019-07-31 · ers output, that is,

Computational Intelligence and Neuroscience 3

Feature fusion of left and right

iris

DHMM classifier

DHMM classifier

DHMM classifier

Voting method

Reliability measurement of right iris

Reliability measurement of left iris

Likelihood ratio based score fusion

Iris feature encoding

and feature extraction

Iris feature encoding

and feature extraction

Iris recognition

Left eye image

PCA baseddimensionality

reduction

PCA based dimensionality

reduction

PCA based dimensionality

reduction

Right eye image

Figure 1 Block diagram of the proposed feature and decision fusion based Multiple Classifier Selection for iris recognition

(a) (b)

(c) (d)

Figure 2 (a) Eye image (b) edge map using canny edge detector (c) results of circular Hough transform and (d) removing the eyelids part

doing that we have to remove those parts from the iris whichare overlapped by eyelids Eyelids were isolated by first fittinga line to the upper and lower eyelid using the linear Houghtransform [30] A second horizontal line is then drawnwhichintersects with the first line at the iris edge that is closest tothe pupil This process is done for both the top and bottomeyelidsThe second horizontal line allowsmaximum isolationof eyelid regions The result of removing the eyelids region isshown in Figure 2(d)

In normalization iris region is transformed into a fixeddimension so that one can compare two different iris imageswith the same spatial location Dimension inconsistenciesof the iris region occur due to stretching of iris causedby pupil dilation for varying levels of light illuminationimage capturing distance angular deflection of camera andeye and so forth Normalization process removes all of

the above-mentioned difficulties to produce a fixed size irisvector Daugmanrsquos Rubber Sheet Model [31] has been used tonormalize the iris region In this process each point of the irisregion converted into a pair of polar coordinates (119903 120579) where119903 is on the interval between 0 and 1 and 120579 is the angle between0 and 2120587 which is shown in Figure 3 Problem can occur forrotation of the eye within the eye socket in iris image Forthis reason the center of the pupil has been considered as thereference point and a number of data points are selected alongeach radial line which is called the radial resolution In thisway we can get the fixed dimension of iris region which isshown in Figure 3(b)

For extracting the feature it is important to extract themost important information presented in the iris regionThere are different alternative techniques that can be used forfeature extraction which includes Gabor filtering techniques

4 Computational Intelligence and Neuroscience

(a) (b)

120579

r

120579

r

Figure 3 (a) Normalization process and (b) iris region with fixeddimension

2-DLog-

Gabor filtering

1

9600

Reduced iris

feature

1

550

1

550

120579

r

1 middot middot middot 96

PCA

Figure 4 Feature encoding and dimensionality reduction process

Zero-crossing 1D wavelet filters Log-Gabor filters and Haarwavelet In this work Log-Gabor filtering technique has beenapplied to extract the iris features effectively 9600 featurevalues have been taken from each iris region PrincipalComponent Analysis method has been used to reduce thedimension of the feature vector where 550 feature values havebeen taken Feature extraction and dimensionality reductionprocess are shown in Figure 4

4 Iris Recognition Using HMM Classifier

In training phase of the proposed iris recognition systemfor each iris 119896 DHMM (Discrete HMM) 120579119896 has been built[32] The model parameters (119860 119861 120579) have been estimated tooptimize the likelihood of the training set observation vectorfor the 119896th iris by using Baum-Welch algorithm The Baum-Welch reestimation formula has been considered as follows[33]

Π119894 = 1205741 (119894)

119886119894119895 =

sum119879minus1

119905=1120585119905 (119894 119895)

sum119879minus1

119905=1120574119905 (119894)

119887119895 () =

sum119879

119905=1(119904119905 119900119905=V119896)120574119905 (119895)

sum119879

119905=1120574119905 (119895)

(1)

where

120585119905 (119894 119895) =

120572119905 (119894) 119886119894119895119887119895 (119900119905+1) 120573119905+1 (119895)

sum119873

119894=1sum119873

119895=1120572119905 (119894) 119886119894119895119887119895 (119900119905+1) 120573119905+1 (119895)

120574119905 (119894) =

119873

sum

119895=1

120585119905 (119894 119895)

(2)

In the testing phase for each unknown iris to be recognizedthe processing shown in Figure 5 has been carried out Thisprocedure includes

Eye image

Iris segmentation

Iris feature extraction

Dimensionality reduction

Probability computation

Probability computation

Probability computation

Select maximum

Recognition result

DHMM for iris number 1

DHMM for

DHMM for iris number 2

Feature conditioning

1205791

1205792

120579k

1 le k le K

iris number k

P(O | 1205792)

P(O | 1205791)

P(O | 120579k)

klowast = arg max [P(O |120579k)]

Figure 5 Block diagram of DHMM based iris recognition system(modified) [10]

(i) measurement of the observation sequence 119874 =

1199001 1199002 119900119899 via a feature analysis of the iris pattern(ii) transforming the continuous values of 119874 into integer

values(iii) calculation of model likelihoods for all possible mod-

els 119875(119874 | 120579119896) 1 le 119896 le 119870(iv) declaration of the iris as 119896lowast person whose model

likelihood is the highestmdashthat is

119896lowast= argmax1le119896le119870

[119875 (119874 | 120579119896)] (3)

In this proposed work the probability computation step hasbeen performed using Baumrsquos Forward-Backward algorithm[33 34]

Medium sized iris dataset has been used for most ofthe recent iris recognition tasks However it is difficultto evaluate the performance accurately with this problemHowever more and more large-scale iris recognition systemsare deployed in real-world applications Many new problemsare met in classification and indexing of large-scale iris imagedatabases So scalability is another challenging issue in irisrecognition CASIA-IrisV4 iris database has been released topromote research on long-range and large-scale iris recogni-tion systems As a result to measure the performance of theoverall iris recognition system CASIA-IrisV4 iris database[35] has been used

CASIA-IrisV4 database was developed and released tothe international biometrics community and updated fromCASIA-IrisV1 to CASIA-IrisV3 since 2002 More than 3000users from 70 countries or regions have downloaded CASIA-Iris and much excellent work on iris recognition has beendone based on these iris image databases CASIA-IrisV4

Computational Intelligence and Neuroscience 5

contains 54601 iris images where 1800 are genuine and 1000are virtual subjects Iris images are collected under nearinfrared illumination or synthesized and represented as 8-bit gray level There are six data sets which were collectedat different times and four different methodologies wereused that is CASIA-Iris-Interval CASIA-Iris-Lamp CASIA-Iris-Distance and CASIA-Iris-Thousand Since CASIA-Iris-Interval is well suited for the detailed texture features ofiris images which are captured by close-up iris camerathis dataset has been used for the experimental work ofthe proposed system The most compelling feature of thisdatabase is to design a circular NIR LED array with suitableluminous flux for iris imaging

For measuring the accuracy of individual left iris andright iris based unimodal recognition system the criticalparamerter that is the number of hidden states of DHMMcan affect the performance of the system A tradeoff is madeto explore the optimum value of the number of hidden statesand comparison results with ReceiverOperatingCharacteris-tics (ROC) curve are shown in Figure 6 which represents theleft and right iris based unimodal recognition performancecombinedly

5 Feature Fusion Based HMM Classifier

Feature level fusion of iris recognition can significantlyimprove the performance of a multibiometric system besidesimproving population coverage deterring spoof attacksincreasing the degrees-of-freedom and reducing the failure-to-enroll rate Although the storage requirements processingtime and the computational demands of feature fusionbased system are much higher than unimodal system for irisrecognition [36] effective integration of left and right irisfeatures can remove or reduce most of the above-mentionedproblems Feature level fusion of left and right iris featuresis an important fusion strategy which can improve overallsystem performance of the proposed system In feature levelfusion sufficient information can exist compared with scorelevel fusion and decision level fusion As a result it canbe expected that feature level fusion can achieve greaterperformance over other fusion strategies of the proposedmultimodal iris recognition system

Feature level fusion can be found by simple concate-nation of the feature sets taken from left and right infor-mation source By concatenating two feature vectors 1198801015840 =1199061015840

1 1199061015840

2 119906

1015840

119898 and1198811015840 = V1015840

1 V10158402 V1015840

119899 a new feature vector

1198801198811015840= 1199061015840

1 1199061015840

2 119906

1015840

119898 V10158401 V10158402 V1015840

119899 119880119881 isin 119877

119898+119899 has beencreated The objective is to combine these two feature sets inorder to create a new feature vector 119880119881 that would betterrepresent the individual To combine left and right iris featurevectors the dimensionality of new feature vector is very largeAs a result dimensionality reduction technique is necessaryto reduce the searching domain of learned database Thefeature selection process chooses a minimal feature set of 119896where 119896 lt (119898+119899) that improves the performance on a trainedset of feature vectors The objective of this phase is to findout optimal subset of features from the complete feature setPrincipal Component Analysis [37] has been used to reduce

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

5 hidden states

10 hidden states15 hidden states20 hidden states25 hidden states

Figure 6 ROC curve of left iris and right iris based unimodalsystem

Feature fusion of left and right iris

DHMM classifier

Right iris feature

feature Left iris

Iris recognition

PCA based dimensionality

reduction

1

9600

1

19200

1

1100

1 19200

1

9600

middot middot middot

Figure 7 Feature fusion based multimodal iris recognition system

the dimensionality of the feature vector Figure 7 shows theprocess of left and right iris feature fusion based multimodaltechnique of the proposed system

In feature level fusion optimum value of the numberof hidden states of DHMM has been chosen and Figure 8shows the comparison results with ROC curve Figure 9shows the performance comparison among unimodal leftiris unimodal right iris and left-right iris feature fusionbased recognition Since the primary goal of left-right irisfeature fusion based multimodal iris recognition system is toachieve the performance which is equal to or better than theperformance of any left or right unimodal iris recognitionsystem When the noise level is high of right iris the leftiris unimodal system performs better than the right irisunimodality thus the left-right iris recognition performanceshould be at least as good as that of the right iris unimodalsystem When the noise level is high of left iris the rightiris recognition performance is better than the left one andthe integrated performance should be at least the same as orbetter than the performance of the right iris recognitionThesystem also works very well when left and right iris imagedo not contain noises Figure 9 shows the above-mentioned

6 Computational Intelligence and Neuroscience

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

5 hidden states

10 hidden states15 hidden states20 hidden states25 hidden states

Figure 8 Recognition performance of left-right iris feature fusionbased multimodal system among different number of hidden statesof DHMM

Left iris based unimodal systemRight iris based unimodal systemLeft-right iris feature fusion based multimodal system

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

Figure 9 Performance comparison among unimodal left iris uni-modal right iris and left-right iris feature fusion based multimodalrecognition system

performance of the left-right feature fusion basedmultimodaliris recognition

6 Likelihood Ratio Score Fusion BasedHMM Classifier

In left-right iris feature fusion based method the left irisand right iris features from all modalities are combined intoone high dimensional vector But the method considers allmodalities with equal weight and it is the main disadvantageof the feature fusion basedmultimodalmethodThis problemcan be solved by using different weights according to differentnoise conditions of the left and right iris modalities Themethod of score fusion by assigning weights of eachmodalitycan be used for this purpose This also allows the dynamic

adjustment of the importance of each stream through the weights according to its estimated reliability.

The left-right iris likelihood ratio based score fusion method is a score fusion technique where the reliability of each modality is measured by using the output of the DHMM classifier for both the left and right iris features. If one of the modalities becomes corrupted by noise, the other modality fills the gap as its weight is increased, so the recognition rate improves when this score fusion based method is used. This is also valid in the case of a complete interruption of one stream: the weight of the uncorrupted modality should then be close to its maximum and the weight assigned to the missing stream close to zero. This practically makes the system revert to single stream recognition automatically. The process is instantaneous and also reversible; that is, if the missing stream is restored, its weight increases back to the level it had before the interruption. The integrated weight, which determines the amount of contribution from each modality in the left-right iris score fusion based recognition system, is calculated from the relative reliability of the two modalities. Figure 10 shows the process of the likelihood ratio score fusion based pair of iris recognition systems.

To calculate the score fusion result, the DHMM outputs of the individual iris (i.e., left and right) recognizers are combined by a weighted sum rule to produce the final score fusion result. For a given left-right iris test datum O_L and O_R, the recognized class C* is given by [38]

C^* = \arg\max_i [ \gamma \log P(O_L \mid \lambda_L^i) + (1 - \gamma) \log P(O_R \mid \lambda_R^i) ],    (4)

where λ_L^i and λ_R^i are the left iris and right iris DHMMs for the ith class, respectively, and log P(O_L | λ_L^i) and log P(O_R | λ_R^i) are their log-likelihoods against the ith class.

The weighting factor γ determines the contribution of each modality to the final decision. Of the two most popular integration approaches, baseline reliability ratio based integration and N-best recognition hypothesis reliability ratio based integration, the baseline reliability ratio based approach has been used here; its integration weight is calculated from the reliability of each individual modality. The reliability of each modality is calculated as [39]

S_m = \frac{1}{N - 1} \sum_{i=1}^{N} ( \max_j \log P(O \mid \lambda_j) - \log P(O \mid \lambda_i) ),    (5)

which means that the average difference between the maximum log-likelihood and the other log-likelihoods is used to determine the reliability of each modality. Here, N is the number of classes being considered to measure the reliability of each modality, and m ∈ {L, R} indexes the modality. Then the integration weight of the left iris reliability measure, γ_L, can be calculated by [40]

\gamma_L = \frac{S_L}{S_L + S_R},    (6)


Figure 10: Process of the likelihood ratio score fusion based pair of iris recognition systems. The left iris based and right iris based DHMM classifiers produce log P(O_L | λ_L^i) and log P(O_R | λ_R^i); the reliability measurements S_L and S_R of the two modalities are combined by baseline reliability ratio based integration into the weights γ_L and γ_R used for iris recognition.

where S_L and S_R are the reliability measures of the outputs of the left iris and right iris DHMMs, respectively.

The integration weight of the right iris modality can be found as

\gamma_R = 1 - \gamma_L.    (7)
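The reliability measure of (5) and the integration weights of (6) and (7) can be sketched in the same spirit; the names are illustrative, and the resulting γ_L is the weighting factor used in (4).

import numpy as np

def reliability(logliks):
    """Equation (5): S_m = (1 / (N - 1)) * sum_i ( max_j log P(O | lambda_j)
                                                   - log P(O | lambda_i) ),
    where logliks holds one modality's log-likelihoods against all N classes."""
    logliks = np.asarray(logliks, dtype=float)
    n = logliks.size
    return float(np.sum(logliks.max() - logliks) / (n - 1))

def integration_weights(loglik_left, loglik_right):
    """Equations (6) and (7): gamma_L = S_L / (S_L + S_R), gamma_R = 1 - gamma_L."""
    s_left = reliability(loglik_left)
    s_right = reliability(loglik_right)
    gamma_left = s_left / (s_left + s_right)
    return gamma_left, 1.0 - gamma_left

# Usage (with the fuse_decision sketch above):
# gamma_L, gamma_R = integration_weights(loglik_left, loglik_right)
# c_star = fuse_decision(loglik_left, loglik_right, gamma_L)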

The results after applying the score fusion approach to the iris recognition system are shown in Figure 11. The results show the comparison among the unimodal left iris, unimodal right iris, and left-right iris score fusion based multimodal recognition systems. Here, the score fusion approach achieves a higher recognition rate than either individual unimodal system.

7. Multiple Classifier Fusion for the Proposed System

An effective way to combine multiple classifiers is required when a set of classifier outputs has been created. Various architectures and schemes have been proposed for combining multiple classifiers [41]. The majority vote [42-45] is the most popular approach. Other voting schemes include the maximum, minimum, median, Nash [46], average [47], and product [48] schemes. Other approaches to combining classifiers include rank-based methods such as the Borda count [49], the Bayes approach [44, 45], the Dempster-Shafer theory [45], the fuzzy integral [50], fuzzy connectives [51], fuzzy templates [52], probabilistic schemes [53], and combination by neural networks [54]. Majority, average, maximum, and Nash voting techniques [41, 47] have been used in this work to find the most efficient voting technique for combining the four classifier outputs.

In the majority voting technique, the correct class is the one most often chosen by the different classifiers. If all the classifiers indicate different classes, or in the case of a tie, the one with the highest overall output is selected to be the correct

Figure 11: Comparison results (false acceptance rate versus false rejection rate, %) among the left iris based unimodal system, the right iris based unimodal system, and the left-right iris likelihood ratio score fusion based multimodal system.

class. For the maximum voting technique, the class with the highest overall output is selected as the correct class:

Q(x) = \arg\max_{i=1}^{K} y_i(x),    (8)

where K is the number of classifiers and y_i(x) represents the output of the ith classifier for the input vector x.
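A small sketch of this maximum rule, under the assumption that the classifier outputs are gathered in a hypothetical confidence matrix y of shape (K classifiers, N classes):

import numpy as np

def maximum_vote(y):
    """Maximum voting, cf. equation (8): the class that receives the single
    highest confidence value from any classifier is selected."""
    y = np.asarray(y, dtype=float)                    # shape (K, N)
    return int(np.unravel_index(np.argmax(y), y.shape)[1])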

The average voting technique averages the individual classifiers' output confidences for each class across the whole ensemble. The class yielding the highest average value is chosen as the correct class:

Q(x) = \arg\max_{j=1}^{N} \left( \frac{1}{K} \sum_{i=1}^{K} y_{ij}(x) \right),    (9)

where N is the number of classes and y_{ij}(x) represents the output confidence of the ith classifier for the jth class for the input x.
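The average rule of (9), in the same assumed (K, N) confidence-matrix convention:

import numpy as np

def average_vote(y):
    """Average voting, equation (9): the class with the highest mean
    confidence across the K classifiers is selected."""
    y = np.asarray(y, dtype=float)                    # shape (K, N)
    return int(np.argmax(y.mean(axis=0)))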

In the Nash voting technique, each voter assigns a number between zero and one to each candidate, and the products of the voters' values are compared across the candidates; the candidate with the highest product is the winner:

Q(x) = \arg\max_{j=1}^{N} \prod_{i=1}^{K} y_{ij}.    (10)
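The Nash rule of (10) and the majority rule described above can be sketched in the same assumed convention; the tie handling follows the rule stated earlier (fall back to the highest overall output), and the matrix y and the function names are illustrative.

import numpy as np

def nash_vote(y):
    """Nash voting, equation (10): confidences are assumed to lie in [0, 1];
    the class with the largest product over the K classifiers wins."""
    y = np.asarray(y, dtype=float)                    # shape (K, N)
    return int(np.argmax(np.prod(y, axis=0)))

def majority_vote(y):
    """Majority voting: the class chosen most often by the K classifiers;
    ties (including all classifiers disagreeing) fall back to the class with
    the highest overall output among the tied candidates."""
    y = np.asarray(y, dtype=float)                    # shape (K, N)
    votes = np.argmax(y, axis=1)                      # each classifier's choice
    counts = np.bincount(votes, minlength=y.shape[1])
    tied = np.flatnonzero(counts == counts.max())
    if tied.size == 1:
        return int(tied[0])
    return int(tied[np.argmax(y[:, tied].max(axis=0))])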

Different types of voting techniques, that is, average vote, maximum vote, Nash vote, and majority vote, have been applied to measure the accuracy of the proposed system. Figure 12 shows the comparison results of the different voting techniques. Though the recognition rates of the voting techniques are very close to each other, the majority voting technique gives the highest recognition rate for the proposed system.

The results of each unimodal iris system, the left-right iris feature fusion based multimodal system, the likelihood ratio score fusion based multimodal system, and the MCS based


Figure 12: Results (false acceptance rate versus false rejection rate, %) of the different voting techniques (average, maximum, Nash, and majority vote) for the proposed system.

Figure 13: Performance comparison (false acceptance rate versus false rejection rate, %) among the left and right iris based unimodal systems, the left-right iris feature fusion based multimodal system, the left-right iris likelihood ratio score fusion based multimodal system, and the MCS based majority voting system.

majority voting technique are compared, as shown in Figure 13. From the results, it can be seen that the MCS based majority voting technique achieves higher performance than any of the individual unimodal or fusion based classifiers of the proposed iris recognition system.

8. Performance Analysis of the Proposed System

The performance of the proposed feature and score fusion based Multiple Classifier Selection (MCS) approach has been compared with the existing Hamming distance score fusion approach proposed by Ma et al. [24], the log-likelihood ratio score fusion approach proposed by Schmid et al. [26], and the feature fusion approach proposed by Hollingsworth et al. [27]. Figure 14 shows the results, where the proposed system has achieved the highest recognition rate over all of the

Figure 14: Performance comparison (false acceptance rate versus false rejection rate, %) between the proposed feature fusion based MCS iris recognition system and the existing approaches: the Hamming distance score fusion approach of Ma et al. (2004), the log-likelihood ratio score fusion approach of Schmid et al. (2006), and the feature fusion approach of Hollingsworth et al. (2009).

above-mentioned existing iris recognition systems. The Hamming distance score fusion approach proposed by Ma et al. has been rebuilt for measuring the performance comparison with the proposed approach. From the ROC curve, it is shown that the existing feature fusion approach of Hollingsworth et al. [27] gives a higher recognition result compared with the Hamming distance score fusion approach of Ma et al. [24] and the log-likelihood ratio score fusion approach of Schmid et al. [26]. Finally, the proposed feature and decision fusion based MCS system outperforms all of the existing multimodal approaches, whether feature fusion or score fusion based. The reason is that the existing approaches applied either a feature fusion or a score fusion technique only, whereas in the proposed approach the feature fusion and score fusion techniques are combined with the individual left iris and right iris recognition techniques. The outputs of these four different classifiers (i.e., the unimodal left iris recognition classifier, the unimodal right iris recognition classifier, the left-right iris feature fusion based classifier, and the left-right iris likelihood ratio score fusion based classifier) are combined using Multiple Classifier Selection (MCS) through the majority voting technique. Since four classifiers are used as the input for the majority voting technique, there is a chance of a tie. In that case, since the left-right iris feature fusion based multimodal system achieves higher performance than either unimodal system and than the likelihood ratio score fusion based multimodal system, as shown in Figure 13, the output of the left-right iris feature fusion based multimodal system has been taken to break the tie. Two unimodal systems are used here because, if one unimodal system fails to recognize a subject, the other unimodal system may still retain the accurate output. Since the feature set contains richer information about the raw biometric data than the final decision, integration at the feature level is expected to provide better recognition results. As a result, left and right iris feature fusion has been applied to improve the performance of the proposed system.
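A minimal sketch of this MCS decision stage is given below, assuming the four classifier decisions are available as class labels in a fixed order and that, as described above, a tie is resolved by the left-right iris feature fusion based classifier. The function name and the ordering are assumptions for illustration.

from collections import Counter

def mcs_majority(decisions, feature_fusion_index=2):
    """decisions: class labels from the four classifiers, assumed ordered as
    [left unimodal, right unimodal, feature fusion, score fusion].
    Majority vote; any tie is broken by the feature fusion classifier."""
    counts = Counter(decisions)
    best_label, best_count = counts.most_common(1)[0]
    tied = [label for label, c in counts.items() if c == best_count]
    if len(tied) > 1:
        return decisions[feature_fusion_index]        # tie-break rule
    return best_label

# Example: mcs_majority([17, 17, 23, 23]) returns 23, the feature fusion output.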


In feature fusion, the features of both the left and right iris modalities are integrated with equal weights, but the decisions of different classifiers can be fused with different weights according to the noise levels of the left and right iris. The likelihood ratio score fusion based iris recognition system has been applied in this proposal to combine the classifier outputs nonequally. When these four different classifier outputs are combined with the MCS based majority voting technique, the proposed multimodal system takes all of the above-mentioned advantages, which gives it a higher recognition rate than the existing multimodal iris recognition approaches of Ma et al., Schmid et al., and Hollingsworth et al.

The time required to fuse the outputs of the classifiers is directly proportional to the number of modalities used in the proposed system. Since four different classifiers are used in this system, the learning and testing times increase. The execution time of the proposed system grows considerably with the number of classifiers used for the majority voting method. Reducing the processing time of the proposed system is a possible direction for further work, so that the system can operate in real-time environments and in applications supporting large populations.

9. Conclusions and Observations

Experimental results show the superiority of the proposed multimodal feature and score fusion based MCS system over the existing multimodal iris recognition systems proposed by Ma et al., Schmid et al., and Hollingsworth et al. Though the CASIA-IrisV4 dataset has been used for measuring the performance of the proposed system, the database has some limitations: the CASIA iris database does not contain specular reflections, because near infrared light is used for illumination. However, some other iris databases, such as LEI, UBIRIS, and ICE, contain iris images with specular reflections and a few noise factors caused by imaging under different natural lighting environments. The proposed system can be tested on the above-mentioned databases to measure its performance under natural lighting conditions at various noise levels. Since the proposed system currently works in an offline environment, its execution time could be reduced so that it can work for real-time applications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] A. K. Jain, R. M. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, Boston, Mass, USA, 1999.
[2] W.-S. Chen and S.-Y. Yuan, An Automatic Iris Recognition System Based on Fractal Dimension, VIPCC Laboratory, Department of Electrical Engineering, National Chi Nan University, 2006.
[3] S. Devireddy, S. Kumar, G. Ramaswamy et al., "A novel approach for an accurate human identification through iris recognition using bitplane slicing and normalisation," Journal of Theoretical and Applied Information Technology, vol. 5, no. 5, pp. 531-537, 2009.
[4] J. Daugman, "Biometric personal identification system based on iris analysis," United States Patent no. 5291560, 1994.
[5] O. Al-Allaf, A. AL-Allaf, A. A. Tamimi, and S. A. AbdAlKader, "Artificial neural networks for iris recognition system: comparisons between different models, architectures and algorithms," International Journal of Information and Communication Technology Research, vol. 2, no. 10, 2012.
[6] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and its Applications, vol. 2, no. 2, pp. 41-50, 2008.
[7] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 955-966, 1995.
[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, 2005.
[9] P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, "Multimodal fusion for multimedia analysis: a survey," Multimedia Systems, vol. 16, no. 6, pp. 345-379, 2010.
[10] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.
[11] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, "A survey of iris biometrics research: 2008-2010," in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15-54, Springer, 2012.
[12] Z. Wang, Q. Han, X. Niu, and C. Busch, "Feature-level fusion of iris and face for personal identification," in Advances in Neural Networks - ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356-364, 2009.
[13] Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, "Complex common vector for multimodal biometric recognition," Electronics Letters, vol. 45, no. 10, pp. 495-497, 2009.
[14] Z. Wang, Q. Li, X. Niu, and C. Busch, "Multimodal biometric recognition based on complex KFDA," in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177-180, Xi'an, China, September 2009.
[15] A. Rattani and M. Tistarelli, "Robust multi-modal and multi-unit feature level fusion of face and iris biometrics," Lecture Notes in Computer Science, vol. 5558, pp. 960-969, 2009.
[16] A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, "Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris," European Journal of Scientific Research, vol. 49, no. 4, pp. 484-502, 2011.
[17] U. Gawande, M. Zaveri, and A. Kapur, "A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.
[18] F. Wang and J. Han, "Multimodal biometric authentication based on score level fusion using support vector machine," Opto-Electronics Review, vol. 17, no. 1, pp. 59-64, 2009.
[19] N. Morizet and J. Gilles, "A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments," in Advances in Visual Computing, vol. 5359, no. 2, of Lecture Notes in Computer Science, pp. 661-671, 2008.
[20] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance," in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9-12, August 2009.
[21] J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, "Multi-modal biometric authentication fusing iris and palmprint based on GMM," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349-352, Cardiff, Wales, August-September 2009.
[22] M. Vatsa, R. Singh, A. Noore, and S. K. Singh, "Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification," in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433-436, February 2009.
[23] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Image averaging for improved iris recognition," in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112-1121, 2009.
[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, 2004.
[25] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23-30, Hilton Rye Town, NY, USA, July 2005.
[26] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154-168, 2006.
[27] K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, "Iris recognition using signal-level fusion of frames from video," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837-848, 2009.
[28] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filter," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414-417, 2002.
[29] R. P. Wildes, J. C. Asmuth, G. L. Green et al., "System for automated iris recognition," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121-128, Sarasota, Fla, USA, December 1994.
[30] L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.
[31] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology," in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 81-87, London, UK, 2000.
[32] M. Hwang and X. Huang, "Shared-distribution hidden Markov models for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414-420, 1993.
[33] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[34] W. Khreich, E. Granger, A. Miri, and R. Sabourin, "On the memory complexity of the forward-backward algorithm," Pattern Recognition Letters, vol. 31, no. 2, pp. 91-99, 2010.
[35] "CASIA Iris Image Database Version 4.0, Biometrics Ideal Test (BIT)," http://www.idealtest.org/.
[36] D. Maltoni, "Biometric fusion," in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.
[37] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[38] A. Rogozan and P. S. Sathidevi, "Static and dynamic features for improved HMM based visual speech recognition," in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184-194, Allahabad, India, 2009.
[39] J. S. Lee and C. H. Park, "Adaptive decision fusion for audio-visual speech recognition," in Speech Recognition, Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.
[40] W. Feng, L. Xie, J. Zeng, and Z. Liu, "Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models," Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188-195, 2009.
[41] N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.
[42] R. Battiti and A. M. Colla, "Democracy in neural nets: voting schemes for classification," Neural Networks, vol. 7, no. 4, pp. 691-707, 1994.
[43] C. Ji and S. Ma, "Combinations of weak classifiers," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32-42, 1997.
[44] L. Lam and C. Y. Suen, "Optimal combinations of pattern classifiers," Pattern Recognition Letters, vol. 16, no. 9, pp. 945-954, 1995.
[45] L. Xu, A. Krzyzak, and C. Y. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition," IEEE Transactions on Systems, Man and Cybernetics, vol. 22, no. 3, pp. 418-435, 1992.
[46] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281-286, 2002.
[47] P. Munro and B. Parmanto, "Competition among networks improves committee performance," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592-598, The MIT Press, Cambridge, Mass, USA, 1997.
[48] D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, "Combining multiple classifiers by averaging or by multiplying," Pattern Recognition, vol. 33, no. 9, pp. 1475-1485, 2000.
[49] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66-75, 1994.
[50] S. Cho and J. H. Kim, "Combining multiple neural networks by fuzzy integral for robust classification," IEEE Transactions on Systems, Man and Cybernetics, vol. 25, no. 2, pp. 380-384, 1995.
[51] L. Kuncheva, "An application of OWA operators to the aggregation of multiple classification decisions," in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330-343, Kluwer Academic, Dordrecht, The Netherlands, 1997.
[52] L. Kuncheva, J. C. Bezdek, and M. A. Sutton, "On combining multiple classifiers by fuzzy templates," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193-197, IEEE, Pensacola Beach, Fla, USA, August 1998.
[53] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, 1998.
[54] M. Ceccarelli and A. Petrosino, "Multi-feature adaptive classifiers for SAR image segmentation," Neurocomputing, vol. 14, no. 4, pp. 345-363, 1997.




[49] T K Ho J J Hull and S N Srihari ldquoDecision combinationin multiple classifier systemsrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 16 no 1 pp 66ndash75 1994

[50] S Cho and J H Kim ldquoCombining multiple neural networksby fuzzy integral for robust classificationrdquo IEEE Transactions onSystems Man and Cybernetics vol 25 no 2 pp 380ndash384 1995

[51] L Kuncheva ldquoAn application of owa operators to the aggre-gation of multiple classification decisionsrdquo in The OrderedWeighted Averaging Operators Theory and Applications RYager and J Kacprzyk Eds pp 330ndash343 Kluwer AcademicDordrecht The Netherlands 1997

Computational Intelligence and Neuroscience 11

[52] L Kuncheva J C Bezdek and M A Sutton ldquoOn combiningmultiple classifiers by fuzzy templatesrdquo in Proceedings of theAnnual Meeting of the North American Fuzzy InformationProcessing Society (NAFIPS rsquo98) pp 193ndash197 IEEE PensacolaBeach Fla USA August 1998

[53] J Kittler M Hatef R P W Duin and J Matas ldquoOn combiningclassifiersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 20 no 3 pp 226ndash239 1998

[54] M Ceccarelli and A Petrosino ldquoMulti-feature adaptive classi-fiers for SAR image segmentationrdquoNeurocomputing vol 14 no4 pp 345ndash363 1997


The CASIA-IrisV4 database contains 54,601 iris images collected from 1,800 genuine subjects and 1,000 virtual subjects. Iris images are collected under near-infrared illumination or synthesized, and are represented as 8-bit gray level images. There are six data sets, which were collected at different times, and four different collection methodologies were used, that is, CASIA-Iris-Interval, CASIA-Iris-Lamp, CASIA-Iris-Distance, and CASIA-Iris-Thousand. Since CASIA-Iris-Interval is well suited to studying the detailed texture features of iris images, which are captured by a close-up iris camera, this data set has been used for the experimental work of the proposed system. The most compelling feature of this data set is its use of a circular NIR LED array with suitable luminous flux for iris imaging.

For measuring the accuracy of the individual left iris and right iris based unimodal recognition systems, the critical parameter is the number of hidden states of the DHMM, which can affect the performance of the system. A tradeoff is made to explore the optimum number of hidden states, and the comparison results are shown as Receiver Operating Characteristic (ROC) curves in Figure 6, which presents the left and right iris based unimodal recognition performance together.
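As an illustration only (not code from the paper), the following minimal Python sketch shows how the false acceptance rate (FAR) and false rejection rate (FRR) pairs plotted in the ROC curves of this work can be tabulated from genuine and impostor classifier scores; the score distributions, the acceptance rule (accept when the score is at least the threshold), and the threshold range are hypothetical assumptions made for the example.

import numpy as np

def far_frr(genuine_scores, impostor_scores, thresholds):
    # FAR counts accepted impostors, FRR counts rejected genuine claims,
    # assuming a claim is accepted when its score >= threshold.
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = np.array([100.0 * np.mean(impostor >= t) for t in thresholds])
    frr = np.array([100.0 * np.mean(genuine < t) for t in thresholds])
    return far, frr

# Hypothetical log-likelihood scores standing in for real DHMM outputs.
genuine_scores = np.random.normal(-40.0, 5.0, size=500)
impostor_scores = np.random.normal(-60.0, 5.0, size=500)
far, frr = far_frr(genuine_scores, impostor_scores, np.linspace(-80.0, -20.0, 61))

Each resulting (FRR, FAR) pair gives one point of an ROC-style curve such as those compared in Figure 6.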

5. Feature Fusion Based HMM Classifier

Feature level fusion can significantly improve the performance of a multibiometric iris recognition system, besides improving population coverage, deterring spoof attacks, increasing the degrees of freedom, and reducing the failure-to-enroll rate. Although the storage requirements, processing time, and computational demands of a feature fusion based system are much higher than those of a unimodal iris recognition system [36], effective integration of left and right iris features can remove or reduce most of the above-mentioned problems. Feature level fusion of the left and right iris features is therefore an important fusion strategy that can improve the overall performance of the proposed system. More information is retained at the feature level than at the score level or the decision level. As a result, it can be expected that feature level fusion will achieve greater performance than the other fusion strategies in the proposed multimodal iris recognition system.

Feature level fusion is obtained by simple concatenation of the feature sets taken from the left and right information sources. By concatenating two feature vectors $U' = \{u'_1, u'_2, \ldots, u'_m\}$ and $V' = \{v'_1, v'_2, \ldots, v'_n\}$, a new feature vector $UV' = \{u'_1, u'_2, \ldots, u'_m, v'_1, v'_2, \ldots, v'_n\}$, $UV \in R^{m+n}$, is created. The objective is to combine these two feature sets in order to create a new feature vector $UV$ that better represents the individual. When the left and right iris feature vectors are combined, the dimensionality of the new feature vector is very large. As a result, a dimensionality reduction technique is necessary to reduce the searching domain of the learned database. The feature selection process chooses a minimal feature set of size $k$, where $k < (m + n)$, that improves the performance on a trained set of feature vectors. The objective of this phase is to find the optimal subset of features from the complete feature set. Principal Component Analysis (PCA) [37] has been used to reduce the dimensionality of the feature vector.

Figure 6: ROC curves (false acceptance rate (%) versus false rejection rate (%)) of the left iris and right iris based unimodal systems for 5, 10, 15, 20, and 25 hidden states.

Figure 7: Feature fusion based multimodal iris recognition system: the left iris and right iris feature vectors (indexed 1 to 9600 each) are concatenated (1 to 19200), reduced by PCA based dimensionality reduction (1 to 1100), and passed to the DHMM classifier for iris recognition.

Figure 7 shows the process of the left and right iris feature fusion based multimodal technique of the proposed system.
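As an illustrative sketch only, and not the paper's implementation, the following Python fragment shows feature level fusion by concatenation followed by PCA based dimensionality reduction; the number of samples, the use of random arrays in place of real iris feature vectors, the reduced dimension, and the choice of scikit-learn's PCA are assumptions made for the example (Figure 7 suggests feature vectors of length 9600 per iris and a reduced dimension of about 1100).

import numpy as np
from sklearn.decomposition import PCA

n_samples, left_dim, right_dim = 200, 9600, 9600
left_features = np.random.rand(n_samples, left_dim)    # placeholder left iris features
right_features = np.random.rand(n_samples, right_dim)  # placeholder right iris features

# Feature level fusion: UV = {u'_1, ..., u'_m, v'_1, ..., v'_n}.
fused = np.concatenate([left_features, right_features], axis=1)

# PCA keeps k < (m + n) components; k is kept small here so this toy
# example runs with few samples (Figure 7 suggests roughly 1100).
k = 50
reduced = PCA(n_components=k).fit_transform(fused)

The reduced vectors are then passed to the DHMM classifier, as indicated in Figure 7.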

For feature level fusion, the optimum number of hidden states of the DHMM has been chosen, and Figure 8 shows the comparison results as ROC curves. Figure 9 shows the performance comparison among the unimodal left iris, unimodal right iris, and left-right iris feature fusion based recognition systems. The primary goal of the left-right iris feature fusion based multimodal iris recognition system is to achieve performance that is equal to or better than that of either the left or the right unimodal iris recognition system. When the noise level of the right iris is high, the left iris unimodal system performs better than the right iris unimodal system; thus, the fused left-right iris recognition performance should be at least as good as that of the left iris unimodal system. When the noise level of the left iris is high, the right iris recognition performance is better than that of the left, and the integrated performance should be at least the same as or better than the performance of the right iris recognition. The system also works very well when the left and right iris images do not contain noise.


Figure 8: Recognition performance (false acceptance rate (%) versus false rejection rate (%)) of the left-right iris feature fusion based multimodal system for different numbers of DHMM hidden states (5, 10, 15, 20, and 25).

Figure 9: Performance comparison (false acceptance rate (%) versus false rejection rate (%)) among the unimodal left iris, unimodal right iris, and left-right iris feature fusion based multimodal recognition systems.

Figure 9 shows the above-mentioned performance of the left-right iris feature fusion based multimodal iris recognition.

6. Likelihood Ratio Score Fusion Based HMM Classifier

In the left-right iris feature fusion based method, the left iris and right iris features are combined into one high dimensional vector. However, the method considers both modalities with equal weight, and this is the main disadvantage of the feature fusion based multimodal method. This problem can be solved by using different weights according to the different noise conditions of the left and right iris modalities. Score fusion, in which a weight is assigned to each modality, can be used for this purpose. It also allows the dynamic adjustment of the importance of each stream through the weights, according to its estimated reliability.

Left-right iris likelihood ratio based score fusion is a score fusion technique in which the reliability of each modality is measured using the output of the DHMM classifier for the left and right iris features. If one of the modalities becomes corrupted by noise, the other modality fills the gap as its weight increases, so the recognition rate improves when this score fusion based method is used. This also holds in the case of a complete interruption of one stream: the weight of the uncorrupted modality should then be close to its maximum and the weight assigned to the missing stream close to zero, which in practice makes the system revert to single stream recognition automatically. This process is instantaneous and also reversible; that is, if the missing stream is restored, the weights return to the levels they had before the interruption. The integrated weight, which determines the amount of contribution from each modality in the left-right iris score fusion based recognition system, is calculated from the relative reliability of the two modalities. Figure 10 shows the process of the likelihood ratio score fusion based pair of iris recognition systems.

To calculate the score fusion result, the DHMM outputs of the individual iris (i.e., left and right) recognizers are combined by a weighted sum rule to produce the final score fusion result. For a given left-right iris test datum $O_L$ and $O_R$, the recognized class $C^*$ is given by [38]

$$C^* = \arg\max_i \left[ \gamma \log P(O_L \mid \lambda_i^L) + (1 - \gamma) \log P(O_R \mid \lambda_i^R) \right], \quad (4)$$

where $\lambda_i^L$ and $\lambda_i^R$ are the left iris and right iris DHMMs for the $i$th class, respectively, and $\log P(O_L \mid \lambda_i^L)$ and $\log P(O_R \mid \lambda_i^R)$ are their log-likelihoods against the $i$th class.

The weighting factor $\gamma$ determines the contribution of each modality to the final decision. Of the two most popular integration approaches, baseline reliability ratio based integration and $N$-best recognition hypothesis reliability ratio based integration, baseline reliability ratio based integration has been used here, where the integration weight is calculated from the reliability of each individual modality. The reliability of each modality can be calculated, in the most appropriate and best performing form, as [39]

$$S_m = \frac{1}{N - 1} \sum_{i=1}^{N} \left( \max_j \log P(O \mid \lambda_j) - \log P(O \mid \lambda_i) \right), \quad (5)$$

which means that the average difference between the maximum log-likelihood and the other log-likelihoods is used to determine the reliability of each modality. Here $N$ is the number of classes considered when measuring the reliability of each modality $m \in \{L, R\}$.

Then the integration weight of the left iris reliability measure, $\gamma_L$, can be calculated by [40]

$$\gamma_L = \frac{S_L}{S_L + S_R}, \quad (6)$$


Figure 10: Process of the likelihood ratio score fusion based pair of iris recognition systems: the left iris and right iris based DHMM classifiers produce $\log P(O_L \mid \lambda_i^L)$ and $\log P(O_R \mid \lambda_i^R)$, the reliability measures $S_L$ and $S_R$ are computed for each modality, and baseline reliability ratio based integration with weights $\gamma_L$ and $\gamma_R$ yields the iris recognition result.

where $S_L$ and $S_R$ are the reliability measures of the outputs of the left iris and right iris DHMMs, respectively.

The integration weight of the right iris modality can be found as

$$\gamma_R = 1 - \gamma_L. \quad (7)$$
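As a minimal sketch (an illustration added here, not the author's code) of the reliability based weighting in (4)-(7), the following Python fragment computes the reliability of each modality, the integration weights, and the fused decision; the per-class log-likelihood values are hypothetical placeholders standing in for real left and right iris DHMM outputs.

import numpy as np

def reliability(log_likelihoods):
    # Eq. (5): average gap between the best log-likelihood and all others.
    ll = np.asarray(log_likelihoods, dtype=float)
    return np.sum(ll.max() - ll) / (ll.size - 1)

def fuse_scores(ll_left, ll_right):
    # Eqs. (6)-(7): integration weights from the relative reliabilities.
    s_left, s_right = reliability(ll_left), reliability(ll_right)
    gamma_left = s_left / (s_left + s_right)
    gamma_right = 1.0 - gamma_left
    # Eq. (4): weighted sum rule over the per-class log-likelihoods.
    fused = gamma_left * np.asarray(ll_left) + gamma_right * np.asarray(ll_right)
    return int(np.argmax(fused)), fused

# Hypothetical per-class log-likelihoods from the left and right iris DHMMs.
c_star, fused = fuse_scores([-52.1, -47.3, -60.8, -55.0],
                            [-49.6, -51.2, -58.4, -57.9])

A noisier modality yields smaller log-likelihood gaps, hence a smaller reliability and a smaller weight, which matches the behaviour described above.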

The results after applying the score fusion approach to the iris recognition system are shown in Figure 11. The results show the comparison among the unimodal left iris, unimodal right iris, and left-right iris score fusion based multimodal recognition systems. Here the score fusion approach achieves a higher recognition rate than either of the individual left and right iris unimodal systems.

7. Multiple Classifier Fusion for the Proposed System

An effective way to combine multiple classifiers is required when a set of classifier outputs is produced. Various architectures and schemes have been proposed for combining multiple classifiers [41]. The majority vote [42–45] is the most popular approach. Other voting schemes include the maximum, minimum, median, Nash [46], average [47], and product [48] schemes. Further approaches to combining classifiers include rank-based methods such as the Borda count [49], the Bayes approach [44, 45], the Dempster-Shafer theory [45], the fuzzy integral [50], fuzzy connectives [51], fuzzy templates [52], probabilistic schemes [53], and combination by neural networks [54]. Majority, average, maximum, and Nash voting techniques [41, 47] have been used in this work to find the most efficient voting technique for combining the outputs of the four classifiers.

In the majority voting technique, the correct class is the one most often chosen by the different classifiers. If all the classifiers indicate different classes, or in the case of a tie, the class with the highest overall output is selected as the correct class.

Figure 11: Results of comparison (false acceptance rate (%) versus false rejection rate (%)) among the unimodal left iris, unimodal right iris, and left-right iris likelihood ratio score fusion based multimodal recognition systems.

For the maximum voting technique, the class with the highest overall output is selected as the correct class:

$$Q(x) = \arg\max_{i=1}^{K} y_i(x), \quad (8)$$

where $K$ is the number of classifiers and $y_i(x)$ represents the output of the $i$th classifier for the input vector $x$.

The averaging voting technique averages the individual classifier output confidences for each class across the whole ensemble. The class yielding the highest average value is chosen as the correct class:

$$Q(x) = \arg\max_{j=1}^{N} \left( \frac{1}{K} \sum_{i=1}^{K} y_{ij}(x) \right), \quad (9)$$

where $N$ is the number of classes and $y_{ij}(x)$ represents the output confidence of the $i$th classifier for the $j$th class for the input $x$.

In the Nash voting technique, each voter assigns a number between zero and one to each candidate, and the products of the voters' values for all the candidates are compared; the candidate with the highest product is the winner:

$$Q(x) = \arg\max_{j=1}^{N} \prod_{i=1}^{K} y_{ij}(x). \quad (10)$$
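The following short Python sketch, added here for illustration under the assumption that each classifier provides per-class confidences in [0, 1], implements the four voting rules compared in this work; the confidence matrix is a hypothetical example, and the tie-breaking in the majority vote follows the description given above (fall back to the highest overall output).

import numpy as np

def majority_vote(confidences):
    # Each classifier votes for its top class; ties (or full disagreement)
    # fall back to the class owning the highest overall output.
    decisions = confidences.argmax(axis=1)
    votes = np.bincount(decisions, minlength=confidences.shape[1])
    winners = np.flatnonzero(votes == votes.max())
    if winners.size == 1:
        return int(winners[0])
    return int(np.argmax(confidences.max(axis=0)))

def maximum_vote(confidences):
    # Eq. (8), read as: the class owning the single highest classifier output.
    return int(np.unravel_index(np.argmax(confidences), confidences.shape)[1])

def average_vote(confidences):
    # Eq. (9): class with the highest mean confidence across the ensemble.
    return int(np.argmax(confidences.mean(axis=0)))

def nash_vote(confidences):
    # Eq. (10): class with the highest product of confidences.
    return int(np.argmax(np.prod(confidences, axis=0)))

# Hypothetical outputs of K = 4 classifiers over N = 3 classes.
confidences = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.5, 0.3],
                        [0.4, 0.4, 0.2],
                        [0.1, 0.7, 0.2]])
results = [f(confidences) for f in (majority_vote, maximum_vote, average_vote, nash_vote)]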

The different voting techniques, that is, average vote, maximum vote, Nash vote, and majority vote, have been applied to measure the accuracy of the proposed system. Figure 12 shows the comparison results of the different voting techniques. Although the recognition rates of the voting techniques are very close to each other, the majority voting technique gives the highest recognition rate for the proposed system.

The results of each unimodal iris system, the left-right iris feature fusion based multimodal system, the likelihood ratio score fusion based multimodal system, and the MCS based majority voting technique are compared in Figure 13.


Figure 12: Results (false acceptance rate (%) versus false rejection rate (%)) of the different voting techniques (average, maximum, Nash, and majority vote) for the proposed system.

Figure 13: Performance comparison (false acceptance rate (%) versus false rejection rate (%)) among the left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, left-right iris likelihood ratio score fusion based multimodal system, and the MCS based majority voting system.

The results show that the MCS based majority voting technique achieves higher performance than each of the other classifier configurations of the iris recognition system.

8. Performance Analysis of the Proposed System

The performance of the proposed feature and score fusion based Multiple Classifier Selection (MCS) approach has been compared with the existing hamming distance score fusion approach proposed by Ma et al. [24], the log-likelihood ratio score fusion approach proposed by Schmid et al. [26], and the feature fusion approach proposed by Hollingsworth et al. [27]. Figure 14 shows the results, where the proposed system achieves the highest recognition rate over all of the above-mentioned existing iris recognition systems.

Figure 14: Performance comparison (false acceptance rate (%) versus false rejection rate (%)) between the proposed feature fusion based MCS iris recognition system and different existing approaches of multimodal iris recognition: the hamming distance score fusion approach proposed by Ma et al. (2004), the log-likelihood ratio score fusion approach proposed by Schmid et al. (2006), and the feature fusion approach proposed by Hollingsworth et al. (2009).

The hamming distance score fusion approach proposed by Ma et al. has been reimplemented for measuring the performance comparison with the proposed approach. From the ROC curves it can be seen that the existing feature fusion approach of Hollingsworth et al. [27] gives a higher recognition result than the hamming distance score fusion approach of Ma et al. [24] and the log-likelihood ratio score fusion approach of Schmid et al. [26]. Finally, the proposed feature and decision fusion based MCS system outperforms all of the existing multimodal approaches, whether feature fusion or score fusion based. The reason is that the existing approaches applied only either a feature fusion or a score fusion technique, whereas in the proposed approach feature fusion and score fusion techniques are combined with the individual left iris and right iris recognition techniques. The four different classifier outputs (i.e., the unimodal left iris recognition classifier, the unimodal right iris recognition classifier, the left-right iris feature fusion based classifier, and the left-right iris likelihood ratio score fusion based classifier) are combined using Multiple Classifier Selection (MCS) through the majority voting technique. Since four classifiers are used as the input for the majority voting technique, there is a chance of a tie. In that case, since the left-right iris feature fusion based multimodal system achieves higher performance than any unimodal system and than the likelihood ratio score fusion based multimodal system, as shown in Figure 13, the output of the left-right iris feature fusion based multimodal system has been taken to break the tie, as sketched below. Two unimodal systems are used here because, if one unimodal system fails to recognize, the other unimodal system can still retain the accurate output. Since the feature set contains richer information about the raw biometric data than the final decision, integration at the feature level is expected to provide better recognition results. As a result, left and right iris feature fusion has been applied to improve the performance of the proposed system.
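A minimal sketch of the tie-breaking rule described above (an illustration added in this rewrite, not the author's code): the four classifier decisions are combined by majority vote, and when no single class wins, the decision of the left-right iris feature fusion based classifier is returned.

import numpy as np

def mcs_decision(left_class, right_class, feature_fusion_class, score_fusion_class):
    decisions = np.array([left_class, right_class, feature_fusion_class, score_fusion_class])
    classes, counts = np.unique(decisions, return_counts=True)
    winners = classes[counts == counts.max()]
    if winners.size == 1:
        return int(winners[0])          # clear majority
    return int(feature_fusion_class)    # tie: trust the feature fusion classifier

# Hypothetical class labels decided by the four classifiers.
print(mcs_decision(3, 7, 3, 7))  # 2-2 tie between classes 3 and 7 -> 3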


In feature fusion, the features of the left and right iris modalities are integrated with equal weights, but the decisions of different classifiers can be fused with different weights according to the noise levels of the left and right iris. The likelihood ratio score fusion based iris recognition system has been applied in this proposal to combine the classifier outputs nonequally. When these four different classifier outputs are combined with the MCS based majority voting technique, the proposed multimodal system takes all of the above-mentioned advantages, which gives it a higher recognition rate than the existing multimodal iris recognition approaches of Ma et al., Schmid et al., and Hollingsworth et al.

The time required to fuse the outputs of the classifiers is directly proportional to the number of modalities used in the proposed system. Since four different classifiers are used in this system, the learning and testing times increase, and the execution time of the proposed system grows considerably with the number of classifiers used in the majority voting method. Reducing the processing time of the proposed system is left as further work, so that the system can operate in real-time environments and in applications supporting large populations.

9. Conclusions and Observations

Experimental results show the superiority of the proposed multimodal feature and score fusion based MCS system over the existing multimodal iris recognition systems proposed by Ma et al., Schmid et al., and Hollingsworth et al. Although the CASIA-IrisV4 dataset has been used for measuring the performance of the proposed system, the database has some limitations: because near-infrared light is used for illumination, the CASIA iris images do not contain specular reflections. However, some other iris databases, such as LEI, UBIRIS, and ICE, contain iris images with specular reflections and a few noise factors caused by imaging under different natural lighting environments. The proposed system can be tested on the above-mentioned databases to measure its performance under natural lighting conditions at various noise levels. Since the proposed system currently works in an offline environment, its execution time can be reduced further so that it can work for real-time applications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] A. K. Jain, R. M. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, Boston, Mass, USA, 1999.

[2] W.-S. Chen and S.-Y. Yuan, An Automatic Iris Recognition System Based on Fractal Dimension, VIPCC Laboratory, Department of Electrical Engineering, National Chi Nan University, 2006.

[3] S. Devireddy, S. Kumar, G. Ramaswamy et al., "A novel approach for an accurate human identification through iris recognition using bitplane slicing and normalisation," Journal of Theoretical and Applied Information Technology, vol. 5, no. 5, pp. 531–537, 2009.

[4] J. Daugman, "Biometric personal identification system based on iris analysis," United States Patent no. 5291560, 1994.

[5] O. Al-Allaf, A. AL-Allaf, A. A. Tamimi, and S. A. AbdAlKader, "Artificial neural networks for iris recognition system: comparisons between different models, architectures and algorithms," International Journal of Information and Communication Technology Research, vol. 2, no. 10, 2012.

[6] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41–50, 2008.

[7] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 955–966, 1995.

[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.

[9] P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, "Multimodal fusion for multimedia analysis: a survey," Multimedia Systems, vol. 16, no. 6, pp. 345–379, 2010.

[10] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.

[11] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, "A survey of iris biometrics research: 2008–2010," in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15–54, Springer, 2012.

[12] Z. Wang, Q. Han, X. Niu, and C. Busch, "Feature-level fusion of iris and face for personal identification," in Advances in Neural Networks – ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356–364, 2009.

[13] Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, "Complex common vector for multimodal biometric recognition," Electronics Letters, vol. 45, no. 10, pp. 495–497, 2009.

[14] Z. Wang, Q. Li, X. Niu, and C. Busch, "Multimodal biometric recognition based on complex KFDA," in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177–180, Xi'an, China, September 2009.

[15] A. Rattani and M. Tistarelli, "Robust multi-modal and multi-unit feature level fusion of face and iris biometrics," Lecture Notes in Computer Science, vol. 5558, pp. 960–969, 2009.

[16] A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, "Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris," European Journal of Scientific Research, vol. 49, no. 4, pp. 484–502, 2011.

[17] U. Gawande, M. Zaveri, and A. Kapur, "A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.

[18] F. Wang and J. Han, "Multimodal biometric authentication based on score level fusion using support vector machine," Opto-Electronics Review, vol. 17, no. 1, pp. 59–64, 2009.

[19] N. Morizet and J. Gilles, "A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments," in Advances in Visual Computing, vol. 5359, no. 2, of Lecture Notes in Computer Science, pp. 661–671, 2008.

[20] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance," in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9–12, August 2009.

[21] J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, "Multi-modal biometric authentication fusing iris and palmprint based on GMM," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349–352, Cardiff, Wales, August-September 2009.

[22] M. Vatsa, R. Singh, A. Noore, and S. K. Singh, "Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification," in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433–436, February 2009.

[23] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Image averaging for improved iris recognition," in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112–1121, 2009.

[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[25] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23–30, Hilton Rye Town, NY, USA, July 2005.

[26] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168, 2006.

[27] K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, "Iris recognition using signal-level fusion of frames from video," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837–848, 2009.

[28] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filter," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414–417, 2002.

[29] R. P. Wildes, J. C. Asmuth, G. L. Green et al., "System for automated iris recognition," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121–128, Sarasota, Fla, USA, December 1994.

[30] L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.

[31] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology," in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 81–87, London, UK, 2000.

[32] M. Hwang and X. Huang, "Shared-distribution hidden Markov models for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414–420, 1993.

[33] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.

[34] W. Khreich, E. Granger, A. Miri, and R. Sabourin, "On the memory complexity of the forward-backward algorithm," Pattern Recognition Letters, vol. 31, no. 2, pp. 91–99, 2010.

[35] "CASIA Iris Image Database Version 4.0 – Biometrics Ideal Test (BIT)," http://www.idealtest.org/.

[36] D. Maltoni, "Biometric fusion," in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.

[37] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[38] A. Rogozan and P. S. Sathidevi, "Static and dynamic features for improved HMM based visual speech recognition," in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184–194, Allahabad, India, 2009.

[39] J. S. Lee and C. H. Park, "Adaptive decision fusion for audio-visual speech recognition," in Speech Recognition, Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.

[40] W. Feng, L. Xie, J. Zeng, and Z. Liu, "Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models," Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188–195, 2009.

[41] N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.

[42] R. Battiti and A. M. Colla, "Democracy in neural nets: voting schemes for classification," Neural Networks, vol. 7, no. 4, pp. 691–707, 1994.

[43] C. Ji and S. Ma, "Combinations of weak classifiers," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32–42, 1997.

[44] L. Lam and C. Y. Suen, "Optimal combinations of pattern classifiers," Pattern Recognition Letters, vol. 16, no. 9, pp. 945–954, 1995.

[45] L. Xu, A. Krzyzak, and C. Y. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition," IEEE Transactions on Systems, Man and Cybernetics, vol. 22, no. 3, pp. 418–435, 1992.

[46] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.

[47] P. Munro and B. Parmanto, "Competition among networks improves committee performance," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592–598, The MIT Press, Cambridge, Mass, USA, 1997.

[48] D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, "Combining multiple classifiers by averaging or by multiplying," Pattern Recognition, vol. 33, no. 9, pp. 1475–1485, 2000.

[49] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66–75, 1994.

[50] S. Cho and J. H. Kim, "Combining multiple neural networks by fuzzy integral for robust classification," IEEE Transactions on Systems, Man and Cybernetics, vol. 25, no. 2, pp. 380–384, 1995.

[51] L. Kuncheva, "An application of OWA operators to the aggregation of multiple classification decisions," in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330–343, Kluwer Academic, Dordrecht, The Netherlands, 1997.

[52] L. Kuncheva, J. C. Bezdek, and M. A. Sutton, "On combining multiple classifiers by fuzzy templates," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193–197, IEEE, Pensacola Beach, Fla, USA, August 1998.

[53] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.

[54] M. Ceccarelli and A. Petrosino, "Multi-feature adaptive classifiers for SAR image segmentation," Neurocomputing, vol. 14, no. 4, pp. 345–363, 1997.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Research Article Feature and Score Fusion Based Multiple Classifier Selection for Iris ...downloads.hindawi.com/journals/cin/2014/380585.pdf · 2019-07-31 · ers output, that is,

6 Computational Intelligence and Neuroscience

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

5 hidden states

10 hidden states15 hidden states20 hidden states25 hidden states

Figure 8 Recognition performance of left-right iris feature fusionbased multimodal system among different number of hidden statesof DHMM

Left iris based unimodal systemRight iris based unimodal systemLeft-right iris feature fusion based multimodal system

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

Figure 9 Performance comparison among unimodal left iris uni-modal right iris and left-right iris feature fusion based multimodalrecognition system

performance of the left-right feature fusion basedmultimodaliris recognition

6 Likelihood Ratio Score Fusion BasedHMM Classifier

In left-right iris feature fusion based method the left irisand right iris features from all modalities are combined intoone high dimensional vector But the method considers allmodalities with equal weight and it is the main disadvantageof the feature fusion basedmultimodalmethodThis problemcan be solved by using different weights according to differentnoise conditions of the left and right iris modalities Themethod of score fusion by assigning weights of eachmodalitycan be used for this purpose This also allows the dynamic

adjustment of the importance of each stream through theweights according to its estimated reliability

Left-right iris likelihood ratio based score fusion methodis a score fusion technique where the reliability of eachmodality ismeasured by using the output ofDHMMclassifierfor both the left and right iris features If one of themodalitiesbecomes corrupted by noise the othermodality filling the gapby making its weight and the recognition rate will increaseto use this score fusion based method This is also valid inthe case of a complete interruption of one stream In thiscase the corrupted modality should be close to maximumand the weight assigned to the missing stream close to zeroThis practically makes the system revert to single streamrecognition automatically This process is instantaneous andalso reversible that is if the missing stream is restored themodality would decrease and the weight would increase tothe level before the interruptionThe integrated weight whichdetermines the amount of contribution from each modalityin left-right iris score fusion based recognition system iscalculated from the relative reliability of the two modalitiesFigure 10 shows the process of using likelihood ratio scorefusion based pair of iris recognition systems

To calculate the score fusion result theDHMMoutputs ofindividual iris (ie left and right) recognition are combinedby a weighted sum rule to produce the final score fusionresult For a given left-right iris test datum of 119874119871 and 119874119877 therecognition utterance 119862lowast is given by [38]

119862lowast= argmax

119894

120574 log119875(119874119871120582119894

119871

)

+ (1 minus 120574) log119875(119874119877

120582119894

119877

)

(4)

where 120582119894119871and 120582119894

119877are the left iris and the right iris DHMMs

for the 119894th utterance class respectively and log119875(119874119871120582119894

119871) and

log119875(119874119877120582119894

119877) are their log-likelihood against the 119894th class

The weighting factor 120574 determines the contribution ofeach modality for the final decision From the two mostpopular integration approaches such as baseline reliabilityratio based integration and 119873-best recognition hypothesisreliability ratio based integration baseline reliability ratiobased integration has been used where the integration weightis calculated from the reliability of each individual modalityThe reliability of each modality can be calculated by the mostappropriate and best in performance as [39]

119878119898 =1

119873 minus 1

119873

sum

119894=1

(max119895

log119875( 119874120582119895) minus log119875(119874

120582119894)) (5)

which means the average difference between the maximumlog-likelihood and the other ones are used to determine thereliability of each modality119873 is the number of classes beingconsidered to measure the reliability of each modality 119898 isin

119871 119877Then the integration weight of left iris reliability measure

120574119871 can be calculated by [40]

120574119871 =119878119871

119878119871 + 119878119877

(6)

Computational Intelligence and Neuroscience 7

Reliability measurement

of left iris

Integrated pair of irisreliability measures

Reliability measurement of right iris

Left iris based

DHMM classifier

Right iris based

DHMM classifier

Baseline reliability ratio based

integration for likelihood ratio

score fusion

Iris recognition

Log P(OL120582iL)

Log P(OR120582iR)

SL

120574L

120574R

SR

Figure 10 Process of likelihood ratio score fusion based pair of irisrecognition systems

where 119878119871 and 119878119877 are the reliability measure of the outputs ofthe left iris and right iris DHMMs respectively

The integration weight of right iris modality measure canbe found as

120574119877 = (1 minus 120574119871) (7)

The results after applying score fusion approach for irisrecognition system are shown in Figure 11 The results showthe comparison among unimodal left iris unimodal right irisand left-right iris score fusion based multimodal recognitionsystem Here the score fusion approach achieves higherrecognition rate than any individual unimodal system of leftand right iris

7 Multiple Classifier Fusion forthe Proposed System

An effective way to combine multiple classifiers is requiredwhen a set of classifiers outputs are created Various archi-tectures and schemes have been proposed for combiningmultiple classifiers [41] The majority vote [42ndash45] is themost popular approach Other voting schemes include themaximum minimum median nash [46] average [47] andproduct [48] schemes Other approaches to combine clas-sifiers include the rank-based methods such as the Bordacount [49] the Bayes approach [44 45] the Dempster-Shafer theory [45] the fuzzy integral [50] fuzzy connectives[51] fuzzy templates [52] probabilistic schemes [53] andcombination by neural networks [54] Majority averagemaximum and nash voting techniques [41 47] have beenused to find out the most efficient voting technique forcombining four classifiers output in this work

In majority voting technique the correct class is the onemost often chosen by different classifiers If all the classifiersindicate different classes or in the case of a tie then the onewith the highest overall output is selected to be the correct

Left iris based unimodal systemRight iris based unimodal system

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

Left-right iris likelihood ratio score fusion based multimodal system

Figure 11 Results of comparison among unimodal left iris uni-modal right iris and multimodal left-right iris recognition system

class For maximum voting technique the class with thehighest overall output is selected as the correct class

119876 (119909) =119870argmax119894=1

119910119894 (119909) (8)

where 119870 is the number of classifiers and 119910119894(119909) represents theoutput of the 119894th classifier for the input vector 119909

Averaging voting technique averages the individual clas-sifier outputs confidence for each class across all of theensemble The class output yielding the highest average valueis chosen to be the correct class

119876 (119909) =119873argmax119895=1

(1

119870

119870

sum

119894=1

119910119894119895 (119909)) (9)

where 119873 is the number of classes and 119910119894119895(119909) represents theoutput confidence of the 119894th classifier for the 119895th class for theinput 119909

In nash voting technique each voter assigns a numberbetween zero and one for each candidate and then comparesthe product of the voterrsquos values for all the candidates Thehighest is the winner

119876 (119909) =119873argmax119895=1

119870

prod

119894=1

119910119894119895 (10)

Different types of voting techniques that is average votemaximum vote nash vote and majority vote have beenapplied to measure the accuracy of the proposed systemFigure 12 shows the comparison results of different typesof voting techniques Though the recognition rates are veryclose to each other of the voting techniques majority votingtechnique gives the highest recognition rate of the proposedsystem

The results of each unimodal system of iris left-rightiris feature fusion based multimodal system likelihood ratiobased score fusion basedmultimodal system andMCS based

8 Computational Intelligence and Neuroscience

Average voteMaximum vote

Nash voteMajority vote

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

Figure 12 Results of different types of voting techniques for theproposed system

Left iris based unimodal systemRight iris based unimodal systemLeft-right iris feature fusion based multimodal systemLeft-right iris likelihood ratio score fusion based

MCS based majority voting system

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

multimodal system

Figure 13 Performance comparison among each unimodal multi-modal and MCS based majority voting technique for iris recogni-tion

majority voting technique are compared which is shown inFigure 13 From the result it has been shown that MCS basedmajority voting technique achieves higher performance thanany other existing approach of iris recognition system

8 Performance Analysis ofthe Proposed System

The proposed approach of feature and score fusion basedMultiple Classifier Selection (MCS) performance has beencompared with existing hamming distance score fusionapproach proposed by Ma et al [24] log-likelihood ratioscore fusion approach proposed by Schmid et al [26] andfeature fusion approach proposed by Hollingsworth et al[27] Figure 14 shows the results where the proposed systemhas achieved the highest recognition rate over all of the

Hamming distance score fusion approach proposed by

Log-likelihood ratio score fusion approach proposed by

Feature fusion approach proposed by Hollingsworth et al (2009)Proposed feature fusion based MCS iris recognition system

0102030405060708090

100

0 10 20 30 40 50 60 70 80 90 100

False

acce

ptan

ce ra

te (

)

False rejection rate ()

Schmid et al (2006)

Ma et al (2004)

Figure 14 Performance comparison between proposed and differ-ent existing approaches of multimodal iris recognition

above-mentioned existing iris recognition system Hammingdistance score fusion approach proposed by Ma et al hasbeen rebuilt formeasuring the performance comparisonwiththe proposed approach From the ROC curve it is shownthat existing feature fusion approach of Hollingsworth et al[27] approach gives higher recognition result compared withhamming distance score fusion approach of Ma et al [24]and log-likelihood ratio score fusion approach of Schmid etal [26] Finally the proposed feature and decision fusionbased MCS system performs all over the existing multi-modal such as any feature fusion and any score fusionapproach The reason is that the existing approaches appliedonly either feature fusion or score fusion technique Butin this proposed approach feature fusion and score fusiontechniques are combined with individual left iris and rightiris recognition technique These four different classifiersoutput (ie unimodal left iris recognition classifier unimodalright iris recognition classifier left-right iris feature fusionbased classifier and left-right iris likelihood ratio scorefusion based classifier) is combined using Multiple ClassifierSelection (MCS) through majority voting technique Sincefour classifiers are used as the input for the majority votingtechnique there is a chance for a tie In that case since left-right iris feature fusion basedmultimodal system can achievehigher performance than any other unimodal and likelihoodratio score fusion based multimodal system which is shownin Figure 12 the output of left-right iris feature fusion basedmultimodal system output has been taken to break the tieTwo unimodal systems are used here because if one unimodalsystem fails to recognize then the other unimodal systemretains the accurate output as an associate with each otherSince the feature set contains richer information about theraw biometric data than the final decision integration atfeature level fusion is expected to provide better recognitionresults As a result left and right iris feature fusion haveapplied to improve the performance of the proposed system

Computational Intelligence and Neuroscience 9

In feature fusion the features for both left and right irismodalities are integrated with equal weights but decisionof different classifiers can be fused with different weightsaccording to the noise level of left and right iris Likelihoodratio score fusion based iris recognition system has beenapplied in this proposal to combine the classifier outputnonequally When these four different classifiers outputsare combined with MCS based majority voting techniquethe proposed multimodal system takes all of the above-mentioned advantages which gives the highest recognitionrate than other existing approaches of Ma et al Schmidet al and Hollingsworth et al proposed multimodal irisrecognition

The time required to fuse the output of the classifiers isdirectly proportional to the number of modalities used forthe proposed system Since four different classifiers are usedin this system the learning and testing time will increaseTheexecution time of the proposed system is increased enoughwith the uses of the number of classifiers for majority votingmethod Reduced processing time of the proposed systemmight be the further work of this system such that theproposed system can work like real-time environment andlarge population supported applications

9 Conclusions and Observations

Experimental results show the superiority of the proposedmultimodal feature and score fusion based MCS system overexistingmultimodal iris recognition systems proposed byMaet al Schmid et al and Hollingsworth et al Though CASIA-IrisV4 dataset has been used for measuring the performanceof the proposed system the database has some limitationsCASIA iris database does not contain specular reflections dueto the use of near infrared light for illumination Howeversome other iris databases such as LEI UBIRIS and ICEcontain the iris imageswith specular reflections and fewnoisefactors which are caused by imaging under different naturallighting environmentsThe proposed system can be tested onthe above-mentioned databases to measure the performancewith natural lighting conditions at various noise levels Sincethe proposed system can work for the offline environmentthe execution time can be reduced so that it can work for real-time applications

Conflict of Interests

The author declares that there is no conflict of interestsregarding the publication of this paper

References

[1] A K Jain R M Bolle and S Pankanti BIOMETRICS PersonalIdentification in Networked Society Kluwer Academic PressBoston Mass USA 1999

[2] W-S Chen and S-Y Yuan An Automatic Iris Recognition Sys-tem Based on Fractal Dimension VIPCC Laboratory Depart-ment of Electrical Engineering National Chi Nan University2006

[3] S Devireddy S Kumar G Ramaswamy et al ldquoA novel approachfor an accurate human identification through iris recognitionusing bitplane slicing and normalisationrdquo Journal of Theoreticaland Applied Information Technology vol 5 no 5 pp 531ndash5372009

[4] J Daugman ldquoBiometric Personal Identification System Basedon Iris Analysisrdquo United States Patent number 5291560 1994

[5] O Al-Allaf A AL-Allaf A A Tamimi and S A AbdAlKaderldquoArtificial neural networks for iris recognition system compar-isons between different models architectures and algorithmsrdquoInternational Journal of Information and Communication Tech-nology Research vol 2 no 10 2012

[6] R H Abiyev and K Altunkaya ldquoPersonal Iris recognitionusing neural networkrdquo International Journal of Security and itsApplications vol 2 no 2 pp 41ndash50 2008

[7] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, Article ID 505417, pp. 955–966, 1995.

[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.

[9] P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, "Multimodal fusion for multimedia analysis: a survey," Multimedia Systems, vol. 16, no. 6, pp. 345–379, 2010.

[10] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.

[11] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, "A survey of iris biometrics research: 2008–2010," in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15–54, Springer, 2012.

[12] Z. Wang, Q. Han, X. Niu, and C. Busch, "Feature-level fusion of iris and face for personal identification," in Advances in Neural Networks—ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356–364, 2009.

[13] Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, "Complex common vector for multimodal biometric recognition," Electronics Letters, vol. 45, no. 10, pp. 495–497, 2009.

[14] Z. Wang, Q. Li, X. Niu, and C. Busch, "Multimodal biometric recognition based on complex KFDA," in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177–180, Xi'an, China, September 2009.

[15] A. Rattani and M. Tistarelli, "Robust multi-modal and multi-unit feature level fusion of face and iris biometrics," Lecture Notes in Computer Science, vol. 5558, pp. 960–969, 2009.

[16] A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, "Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris," European Journal of Scientific Research, vol. 49, no. 4, pp. 484–502, 2011.

[17] U. Gawande, M. Zaveri, and A. Kapur, "A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.

[18] F. Wang and J. Han, "Multimodal biometric authentication based on score level fusion using support vector machine," Opto-Electronics Review, vol. 17, no. 1, pp. 59–64, 2009.

[19] N. Morizet and J. Gilles, "A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments," in Advances in Visual Computing, vol. 5359 of Lecture Notes in Computer Science, no. 2, pp. 661–671, 2008.

[20] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance," in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9–12, August 2009.

[21] J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, "Multi-modal biometric authentication fusing iris and palmprint based on GMM," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349–352, Cardiff, Wales, August–September 2009.

[22] M. Vatsa, R. Singh, A. Noore, and S. K. Singh, "Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification," in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433–436, February 2009.

[23] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Image averaging for improved iris recognition," in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112–1121, 2009.

[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[25] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23–30, Hilton Rye Town, NY, USA, July 2005.

[26] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168, 2006.

[27] K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, "Iris recognition using signal-level fusion of frames from video," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837–848, 2009.

[28] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filter," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414–417, 2002.

[29] R. P. Wildes, J. C. Asmuth, G. L. Green et al., "System for automated iris recognition," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121–128, Sarasota, Fla, USA, December 1994.

[30] L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.

[31] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology," in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 81–87, London, UK, 2000.

[32] M. Hwang and X. Huang, "Shared-distribution hidden Markov models for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414–420, 1993.

[33] L. R. Rabiner, "Tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.

[34] W. Khreich, E. Granger, A. Miri, and R. Sabourin, "On the memory complexity of the forward-backward algorithm," Pattern Recognition Letters, vol. 31, no. 2, pp. 91–99, 2010.

[35] "CASIA Iris Image Database Version 4.0—Biometrics Ideal Test (BIT)," http://www.idealtest.org/.

[36] D. Maltoni, "Biometric fusion," in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.

[37] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[38] A. Rogozan and P. S. Sathidevi, "Static and dynamic features for improved HMM based visual speech recognition," in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184–194, Allahabad, India, 2009.

[39] J. S. Lee and C. H. Park, "Adaptive decision fusion for audio-visual speech recognition," in Speech Recognition Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.

[40] W. Feng, L. Xie, J. Zeng, and Z. Liu, "Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models," Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188–195, 2009.

[41] N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.

[42] R. Battiti and A. M. Colla, "Democracy in neural nets: voting schemes for classification," Neural Networks, vol. 7, no. 4, pp. 691–707, 1994.

[43] C. Ji and S. Ma, "Combinations of weak classifiers," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32–42, 1997.

[44] L. Lam and C. Y. Suen, "Optimal combinations of pattern classifiers," Pattern Recognition Letters, vol. 16, no. 9, pp. 945–954, 1995.

[45] L. Xu, A. Krzyzak, and C. Y. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition," IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 3, pp. 418–435, 1992.

[46] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.

[47] P. Munro and B. Parmanto, "Competition among networks improves committee performance," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592–598, The MIT Press, Cambridge, Mass, USA, 1997.

[48] D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, "Combining multiple classifiers by averaging or by multiplying," Pattern Recognition, vol. 33, no. 9, pp. 1475–1485, 2000.

[49] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66–75, 1994.

[50] S. Cho and J. H. Kim, "Combining multiple neural networks by fuzzy integral for robust classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 2, pp. 380–384, 1995.

[51] L. Kuncheva, "An application of OWA operators to the aggregation of multiple classification decisions," in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330–343, Kluwer Academic, Dordrecht, The Netherlands, 1997.

[52] L. Kuncheva, J. C. Bezdek, and M. A. Sutton, "On combining multiple classifiers by fuzzy templates," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193–197, IEEE, Pensacola Beach, Fla, USA, August 1998.

[53] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.

[54] M. Ceccarelli and A. Petrosino, "Multi-feature adaptive classifiers for SAR image segmentation," Neurocomputing, vol. 14, no. 4, pp. 345–363, 1997.

Page 9: Research Article Feature and Score Fusion Based Multiple Classifier Selection for Iris ...downloads.hindawi.com/journals/cin/2014/380585.pdf · 2019-07-31 · ers output, that is,

Computational Intelligence and Neuroscience 9

In feature fusion the features for both left and right irismodalities are integrated with equal weights but decisionof different classifiers can be fused with different weightsaccording to the noise level of left and right iris Likelihoodratio score fusion based iris recognition system has beenapplied in this proposal to combine the classifier outputnonequally When these four different classifiers outputsare combined with MCS based majority voting techniquethe proposed multimodal system takes all of the above-mentioned advantages which gives the highest recognitionrate than other existing approaches of Ma et al Schmidet al and Hollingsworth et al proposed multimodal irisrecognition

The time required to fuse the output of the classifiers isdirectly proportional to the number of modalities used forthe proposed system Since four different classifiers are usedin this system the learning and testing time will increaseTheexecution time of the proposed system is increased enoughwith the uses of the number of classifiers for majority votingmethod Reduced processing time of the proposed systemmight be the further work of this system such that theproposed system can work like real-time environment andlarge population supported applications

9 Conclusions and Observations

Experimental results show the superiority of the proposedmultimodal feature and score fusion based MCS system overexistingmultimodal iris recognition systems proposed byMaet al Schmid et al and Hollingsworth et al Though CASIA-IrisV4 dataset has been used for measuring the performanceof the proposed system the database has some limitationsCASIA iris database does not contain specular reflections dueto the use of near infrared light for illumination Howeversome other iris databases such as LEI UBIRIS and ICEcontain the iris imageswith specular reflections and fewnoisefactors which are caused by imaging under different naturallighting environmentsThe proposed system can be tested onthe above-mentioned databases to measure the performancewith natural lighting conditions at various noise levels Sincethe proposed system can work for the offline environmentthe execution time can be reduced so that it can work for real-time applications

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] A. K. Jain, R. M. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, Boston, Mass, USA, 1999.

[2] W.-S. Chen and S.-Y. Yuan, An Automatic Iris Recognition System Based on Fractal Dimension, VIPCC Laboratory, Department of Electrical Engineering, National Chi Nan University, 2006.

[3] S. Devireddy, S. Kumar, G. Ramaswamy et al., "A novel approach for an accurate human identification through iris recognition using bitplane slicing and normalisation," Journal of Theoretical and Applied Information Technology, vol. 5, no. 5, pp. 531–537, 2009.

[4] J. Daugman, "Biometric personal identification system based on iris analysis," United States Patent no. 5291560, 1994.

[5] O. Al-Allaf, A. AL-Allaf, A. A. Tamimi, and S. A. AbdAlKader, "Artificial neural networks for iris recognition system: comparisons between different models, architectures and algorithms," International Journal of Information and Communication Technology Research, vol. 2, no. 10, 2012.

[6] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and its Applications, vol. 2, no. 2, pp. 41–50, 2008.

[7] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, Article ID 505417, pp. 955–966, 1995.

[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.

[9] P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, "Multimodal fusion for multimedia analysis: a survey," Multimedia Systems, vol. 16, no. 6, pp. 345–379, 2010.

[10] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.

[11] K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, "A survey of iris biometrics research: 2008–2010," in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15–54, Springer, 2012.

[12] Z. Wang, Q. Han, X. Niu, and C. Busch, "Feature-level fusion of iris and face for personal identification," in Advances in Neural Networks—ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356–364, 2009.

[13] Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, "Complex common vector for multimodal biometric recognition," Electronics Letters, vol. 45, no. 10, pp. 495–497, 2009.

[14] Z. Wang, Q. Li, X. Niu, and C. Busch, "Multimodal biometric recognition based on complex KFDA," in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177–180, Xi'an, China, September 2009.

[15] A. Rattani and M. Tistarelli, "Robust multi-modal and multi-unit feature level fusion of face and iris biometrics," Lecture Notes in Computer Science, vol. 5558, pp. 960–969, 2009.

[16] A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, "Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris," European Journal of Scientific Research, vol. 49, no. 4, pp. 484–502, 2011.

[17] U. Gawande, M. Zaveri, and A. Kapur, "A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.

[18] F. Wang and J. Han, "Multimodal biometric authentication based on score level fusion using support vector machine," Opto-Electronics Review, vol. 17, no. 1, pp. 59–64, 2009.

[19] N. Morizet and J. Gilles, "A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments," in Advances in Visual Computing, vol. 5359, no. 2 of Lecture Notes in Computer Science, pp. 661–671, 2008.

[20] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance," in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9–12, August 2009.

[21] J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, "Multi-modal biometric authentication fusing iris and palmprint based on GMM," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349–352, Cardiff, Wales, August-September 2009.

[22] M. Vatsa, R. Singh, A. Noore, and S. K. Singh, "Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification," in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433–436, February 2009.

[23] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Image averaging for improved iris recognition," in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112–1121, 2009.

[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[25] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23–30, Hilton Rye Town, NY, USA, July 2005.

[26] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168, 2006.

[27] K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, "Iris recognition using signal-level fusion of frames from video," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837–848, 2009.

[28] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filter," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414–417, 2002.

[29] R. P. Wildes, J. C. Asmuth, G. L. Green et al., "System for automated iris recognition," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121–128, Sarasota, Fla, USA, December 1994.

[30] L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.

[31] S. Sanderson and J. Erbetta, "Authentication for secure environments based on iris scanning technology," in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 81–87, London, UK, 2000.

[32] M. Hwang and X. Huang, "Shared-distribution hidden Markov models for speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414–420, 1993.

[33] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.

[34] W. Khreich, E. Granger, A. Miri, and R. Sabourin, "On the memory complexity of the forward-backward algorithm," Pattern Recognition Letters, vol. 31, no. 2, pp. 91–99, 2010.

[35] "CASIA Iris Image Database Version 4.0—Biometrics Ideal Test (BIT)," http://www.idealtest.org.

[36] D. Maltoni, "Biometric fusion," in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.

[37] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[38] A. Rogozan and P. S. Sathidevi, "Static and dynamic features for improved HMM based visual speech recognition," in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184–194, Allahabad, India, 2009.

[39] J. S. Lee and C. H. Park, "Adaptive decision fusion for audio-visual speech recognition," in Speech Recognition, Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.

[40] W. Feng, L. Xie, J. Zeng, and Z. Liu, "Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models," Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188–195, 2009.

[41] N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.

[42] R. Battiti and A. M. Colla, "Democracy in neural nets: voting schemes for classification," Neural Networks, vol. 7, no. 4, pp. 691–707, 1994.

[43] C. Ji and S. Ma, "Combinations of weak classifiers," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32–42, 1997.

[44] L. Lam and C. Y. Suen, "Optimal combinations of pattern classifiers," Pattern Recognition Letters, vol. 16, no. 9, pp. 945–954, 1995.

[45] L. Xu, A. Krzyzak, and C. Y. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition," IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 3, pp. 418–435, 1992.

[46] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.

[47] P. Munro and B. Parmanto, "Competition among networks improves committee performance," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592–598, The MIT Press, Cambridge, Mass, USA, 1997.

[48] D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, "Combining multiple classifiers by averaging or by multiplying," Pattern Recognition, vol. 33, no. 9, pp. 1475–1485, 2000.

[49] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66–75, 1994.

[50] S. Cho and J. H. Kim, "Combining multiple neural networks by fuzzy integral for robust classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 2, pp. 380–384, 1995.

[51] L. Kuncheva, "An application of OWA operators to the aggregation of multiple classification decisions," in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330–343, Kluwer Academic, Dordrecht, The Netherlands, 1997.

[52] L. Kuncheva, J. C. Bezdek, and M. A. Sutton, "On combining multiple classifiers by fuzzy templates," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193–197, IEEE, Pensacola Beach, Fla, USA, August 1998.

[53] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.

[54] M. Ceccarelli and A. Petrosino, "Multi-feature adaptive classifiers for SAR image segmentation," Neurocomputing, vol. 14, no. 4, pp. 345–363, 1997.
