384 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICSPART C: APPLICATIONS AND REVIEWS, VOL. 40, NO. 4, JULY 2010
A Frequency-Based Approach for Features Fusion in Fingerprint and Iris Multimodal Biometric Identification Systems
Vincenzo Conti, Carmelo Militello, Filippo Sorbello, Member, IEEE, and Salvatore Vitabile, Member, IEEE
Abstract: The basic aim of a biometric identification system is to discriminate automatically between subjects in a reliable and dependable way, according to a specific target application. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. In this paper, an innovative multimodal biometric identification system based on iris and fingerprint traits is proposed. The paper is a state-of-the-art advancement of multibiometrics, offering an innovative perspective on features fusion. In greater detail, a frequency-based approach results in a homogeneous biometric vector, integrating iris and fingerprint data. Subsequently, a Hamming-distance-based matching algorithm deals with the unified homogeneous biometric vector. The proposed multimodal system achieves interesting results with several commonly used databases. For example, we have obtained an interesting working point with FAR = 0% and FRR = 5.71% using the entire Fingerprint Verification Competition (FVC) 2002 DB2B database and a randomly extracted same-size subset of the BATH database. At the same time, considering the BATH database and the FVC2002 DB2A database, we have obtained a further interesting working point with FAR = 0% and FRR = 7.28%-9.7%.
Index Terms: Fusion techniques, identification systems, iris and fingerprint biometry, multimodal biometric systems.
I. INTRODUCTION
In the current technological scenario, where Information
and Communication Technologies (ICT) provide advanced
services, large-scale and heterogeneous computer systems need
strong procedures to protect data and resources from access by
unauthorized users. Authentication procedures based on the
simple username-password approach are insufficient to provide
a suitable security level for applications requiring a high level
of protection for data and services.
Biometric-based authentication systems represent a valid alternative to conventional approaches. Traditionally, biometric
Manuscript received May 29, 2009; revised November 20, 2009; accepted February 7, 2010. Date of publication April 22, 2010; date of current version June 16, 2010. This paper was recommended by Associate Editor E. R. Weippl.
V. Conti, C. Militello, and F. Sorbello are with the Department of Computer Engineering, University of Palermo, Palermo 90128, Italy (e-mail: [email protected]; [email protected]; [email protected]).
S. Vitabile is with the Department of Biopathology, Medical and Forensic Biotechnologies, University of Palermo, Palermo 90127, Italy (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMCC.2010.2045374
systems, operating on a single biometric feature, have many
limitations, which are as follows [1].
1) Trouble with data sensors: Captured sensor data are often
affected by noise due to environmental conditions (insufficient
light, powder, etc.) or due to user physiological and physical
conditions (cold, cut fingers, etc.).
2) Distinctiveness ability: Not all biometric features have
the same distinctiveness degree (for example, hand-geometry-based
biometric systems are less selective than fingerprint-based ones).
3) Lack of universality: Although biometric features are, in
principle, universal, due to the wide variety and complexity of
the human body, not everyone is endowed with the same physical
features and might not possess all the biometric features
that a system might require.
Multimodal biometric systems are a recent approach devel-
oped to overcome these problems. These systems demonstrate
significant improvements over unimodal biometric systems, in
terms of higher accuracy and high resistance to spoofing.
There is a sizeable amount of literature detailing the different
approaches for multimodal biometric systems that have
been proposed [1]-[4]. Multibiometric data can be combined at
different levels: fusion at the data-sensor level, fusion at the
feature-extraction level, fusion at the matching level, and fusion at the
decision level. As pointed out in [5], feature-level fusion is easier
to apply when the original characteristics are homogeneous
because, in this way, a single resultant feature vector can be
calculated. On the other hand, feature-level fusion is difficult to
achieve because: 1) the relationship between the feature spaces
may not be known; 2) the feature sets of multiple modalities
may be incompatible; and 3) the computational cost to process
the resultant vector may be too high.
In this paper, a template-level fusion algorithm resulting in a
unified biometric descriptor integrating fingerprint and iris
features is presented. Considering a limited number of meaningful
descriptors for fingerprint and iris images, a frequency-based
codifying approach results in a homogeneous vector composed
of fingerprint and iris information. Subsequently, the Hamming
Distance (HD) between two vectors is used to obtain their
similarity degree. To evaluate and compare the effectiveness of the
proposed approach, different tests on the official Fingerprint
Verification Competition (FVC) 2002 DB2 fingerprint database [30]
and the University of Bath Iris Image Database (BATH) iris
database [31] have been performed. In greater detail, the tests
conducted on the FVC2002 DB2B database and a subset of the
BATH database (ten users) have resulted in a False Acceptance
1094-6977/$26.00 © 2010 IEEE
CONTI et al.: FREQUENCY-BASED APPROACH FOR FEATURES FUSION 385
Rate (FAR) = 0% and a False Rejection Rate (FRR) = 5.71%, while the tests conducted on the FVC2002 DB2A database and
the BATH database (50 users) have resulted in an FAR = 0% and an FRR = 7.28%-9.7%.
The paper is organized as follows. Section II presents the main
related works. Section III illustrates the main techniques for
multimodal biometric authentication systems. Section IV describes
the proposed multimodal authentication system. Section V
shows the achieved experimental results. Section VI deals with
the comparison with state-of-the-art solutions. Finally, some
conclusions and future works are reported in Section VII.
II. RELATED WORKS
A variety of articles can be found, which propose different ap-
proaches for unimodal and multimodal biometric systems. Tra-
ditionally, unimodal biometric systems have many limitations.
Multimodal biometric systems are based on different biometric
features and/or introduce different fusion algorithms for these
features. Many researchers have demonstrated that the fusion
process is effective, because fused scores provide much better discrimination than individual scores. Such results have been
achieved using a variety of fusion techniques (see Section III
for further details). In what follows, the most meaningful works
of the aforementioned fields are described.
A unimodal fingerprint verification and classification system
is presented in [7]. The system is based on a feedback path for
the feature-extraction stage, followed by a feature-refinement
stage to improve the matching performance. This improvement
is illustrated in the context of a minutiae-based fingerprint verification system. The Gabor filter is applied to the input image
to improve its quality.
Ratha et al. [9] proposed a unimodal distortion-tolerant fingerprint authentication technique based on graph representation. Using the fingerprint minutiae features, a weighted graph
of minutiae is constructed for both the query fingerprint and the
reference fingerprint. The proposed algorithm has been tested
with excellent results on a large private database with the use of
an optical biometric sensor.
Concerning iris recognition systems, in [10], the Gabor filter
and 2-D wavelet filter are used for feature extraction. This
method is invariant to translation and rotation and is tolerant
to illumination. The classification rate using the Gabor filter is
98.3%, and the accuracy with the wavelet is 82.51%, on the Institute
of Automation of the Chinese Academy of Sciences (CASIA)
database.
In the approach proposed in [11], multichannel and Gabor
filters have been used to capture local texture information of the
iris, which are used to construct a fixed-length feature vector.
The results obtained were FAR = 0.01% and FRR = 2.17% on the CASIA database.
Generally, unimodal biometric recognition systems present
different drawbacks due to their dependency on a single biometric
feature. For example, feature distinctiveness, feature
acquisition, processing errors, and features that are temporarily
unavailable can all affect system accuracy. A multimodal biometric
system should overcome the aforementioned limits by
integrating two or more biometric features.
Conti et al. [12] proposed a multimodal biometric sys-
tem using two different fingerprint acquisitions. The matching
module integrates fuzzy-logic methods for matching-score fu-
sion. Experimental trials using both decision-level fusion and
matching-score-level fusion were performed. Experimental re-
sults have shown an improvement of 6.7% using the matching-
score-level fusion rather than a monomodal authentication
system.
Yang and Ma [2] used fingerprint, palm print, and hand ge-
ometry to implement personal identity verification. Unlike other
multimodal biometric systems, these three biometric features
can be taken from the same image. They implemented matching-
score fusion at different levels to establish identity, performing
a first fusion of the fingerprint and palm-print features, and suc-
cessively, a matching-score fusion between the multimodal sys-
tem and the palm-geometry unimodal system. The system was
tested on a self-constructed database containing the features of 98 subjects.
Besbes et al. [13] proposed a multimodal biometric system
using fingerprint and iris features. They use a hybrid approach based on: 1) fingerprint minutiae extraction and 2) iris template encoding through a mathematical representation of the
extracted iris region. This approach is based on two recognition
modalities and every part provides its own decision. The final
decision is taken by considering the unimodal decision through
an AND operator. No experimental results have been reported
for recognition performance.
Aguilar et al. [14] proposed a multibiometric method using
a combination of fast Fourier transform (FFT) and Gabor fil-
ters to enhance fingerprint imaging. Successively, a novel stage
for recognition using local features and statistical parameters is
used. The proposed system uses the fingerprints of both thumbs. Each fingerprint is separately processed; subsequently, the uni-
modal results are compared in order to give the final fused
result. The tests have been performed on a fingerprint database
composed of 50 subjects, obtaining FAR = 0.2% and FRR = 1.4%.
Subbarayudu and Prasad [15] presented experimental re-
sults of the unimodal iris system, unimodal palmprint system,
and multibiometric system (iris and palmprint). The system
fusion utilizes matching scores, in which each system
provides a matching score indicating the similarity of the
feature vector with the template vector. The experiment was
conducted on the Hong Kong Polytechnic University Palmprint
database. A total of 600 images were collected from 100 different subjects.
In contrast to the approaches found in the literature and detailed
earlier, the proposed approach introduces an innovative idea
to unify and homogenize the final biometric descriptor using
two different strong features: the fingerprint and the iris. In
opposition to [2], this paper shows the improvements introduced
by adopting the fusion process at the template level, as well as
the related comparisons against the unimodal elements and the
classical matching-score fusion-based multimodal system. In
addition, the system proposed in this paper has been tested on
the official fingerprint FVC2002 DB2 and iris BATH databases
[30], [31].
III. MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEMS
Fusion strategies can be divided into two main categories:
premapping fusion (before the matching phase) and postmap-
ping fusion (after the matching phase). The first strategy deals
with the feature-vector fusion level [17]. Usually, these tech-
niques are not used because they result in many implementation
problems [1]. The second strategy is realized through fusion at the decision level, based on algorithms that combine
single decisions for each component of the system. Further-
more, the second strategy is also based on the matching-score
level, which combines the matching scores of each component
system.
The biometric data can be combined at several different levels
of the identification process. Input can be fused in the following
levels [1], [5].
1) Data-sensor level: Data coming from different sensors
can be combined, so that the resulting information is, in
some sense, better than would be possible if these
sources were used individually. The term better in this
case can mean more accurate, more complete, or more
dependable.
2) Feature-extraction level: The information extracted from
sensors of different modalities is stored in vectors on the
basis of their modality. These feature vectors are then combined to create a joint feature vector, which is the basis for
the matching and recognition process. One of the poten-
tial problems in this strategy is that, in some cases, a very
high-dimensional feature vector results from the fusion
process. In addition, it is hard to generate homogeneous
feature vectors from different biometrics in order to use a
unified matching algorithm.
3) Matching-score level: This is based on the combination of matching scores, after separate feature extraction and
comparison between stored data and test data for each sub-
system have been compiled. Starting from the matching
scores or distance measures of each subsystem, an over-
all matching score is generated using linear or nonlinear
weighting.
4) Decision level: With this approach, each biometric sub-
system completes autonomously the processes of feature
extraction, matching, and recognition. Decision strategies
are usually Boolean functions, where the recognition
yields the majority decision among all present subsystems.
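As a toy illustration of the last two levels above, the following sketch combines subsystem matching scores with a linear weighting and takes a majority vote at the decision level. The function names and weights are illustrative assumptions, not from the paper:

```python
# Toy sketches of two postmapping fusion strategies: a linear
# matching-score combination and a decision-level majority vote.
# Names and weights are illustrative, not taken from the paper.

def fused_score(scores, weights):
    """Matching-score-level fusion: normalized weighted sum of scores."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def majority_decision(decisions):
    """Decision-level fusion: accept when most subsystems accept."""
    return sum(decisions) > len(decisions) / 2
```

For example, with weights (3, 1), subsystem scores 0.8 and 0.6 fuse to 0.75, while a two-subsystem tie in the majority vote is rejected.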
Fusion at the template level is very difficult to obtain, since biometric
features have different structures and distinctiveness. In
this paper, we introduce a frequency approach based on the
Log-Gabor filter [18] to generate a unified homogeneous template
for fingerprint and iris features. In greater detail, the proposed
approach performs fingerprint matching using the segmented
regions (regions of interest, ROIs) surrounding fingerprint
singularity points. On the other hand, iris preprocessing aims to
detect the circular region surrounding the iris. As a result, we
adopted a Log-Gabor-algorithm-based codifier to encode both
fingerprint and iris features, obtaining a unified template. Subsequently,
the HD on the fused template has been used for the
similarity index computation.
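A masked Hamming-distance computation of the kind described above can be sketched as follows. The bit-list representation and the function name are assumptions; in the paper, the compared templates are the Log-Gabor-coded ROIs with their associated noise masks:

```python
# Sketch of a masked Hamming-distance matcher. Bits flagged by either
# noise mask are excluded from the comparison; the representation
# (flat 0/1 lists) is an illustrative assumption.

def hamming_distance(template_a, template_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the usable (unmasked) bits.

    template_*: sequences of 0/1 bits.
    mask_*: 1 marks a noisy (unusable) bit, 0 a valid one.
    """
    usable = 0
    disagreeing = 0
    for a, b, ma, mb in zip(template_a, template_b, mask_a, mask_b):
        if ma or mb:          # bit corrupted in either template: skip it
            continue
        usable += 1
        disagreeing += (a != b)
    if usable == 0:
        return 1.0            # nothing comparable: treat as maximal distance
    return disagreeing / usable
```

Identical templates with clean masks yield a distance of 0; the score grows toward 1 as the usable bits disagree.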
Fig. 1. General schema of the proposed multimodal system.
IV. PROPOSED MULTIMODAL BIOMETRIC SYSTEM
In this paper, a multimodal biometric system, based on fingerprint
and iris characteristics, is proposed. As shown in Fig. 1,
the proposed multimodal biometric system is composed of two
main stages: the preprocessing stage and the matching stage. Iris and
fingerprint images are preprocessed to extract the ROIs, based
on singularity regions, surrounding some meaningful points.
In contrast to the classic minutiae-based approach, the fingerprint-singularity-regions-based
approach requires a low execution time, since image analysis is based on a few points (core and
delta) rather than 30-50 minutiae. Iris image preprocessing is
performed by segmenting the iris region from the eye and deleting
the eyelids and eyelashes. The extracted ROIs are used as
input for the matching stage. They are normalized, and then
processed through a frequency-based approach, in order to generate
a homogeneous template. A matching algorithm based
on the HD is used to find the similarity degree. In what follows, each
phase is briefly described.
A. Preprocessing Stage
An ROI is a selected part of a sample or an image used to perform particular tasks. In what follows, the fingerprint singu-
larity regions extraction process and the iris region extraction
process are described.
1) Fingerprint Singularity Region Extraction: Singularity
regions are particular fingerprint zones surrounding singularity
points, namely the core and the delta. Several approaches
for singularity-point detection have been proposed in the literature.
They can be broadly categorized into techniques based on
1) the Poincare index; 2) heuristics; 3) irregularity or curvature
operators; and 4) template matching.
By far, the most popular method has been proposed by
Kawagoe and Tojo [19]. The method is based on the Poincare
index, since it assumes that the core, double core, and delta generate
a Poincare index equal to 180°, 360°, and -180°, respectively.
Fingerprint singularity region extraction is composed of
three main blocks: directional image extraction, Poincare index
calculation, and singularity-point detection.
Singularity points are not included in fingerprint images when
either the acquired image is only a partial image, or it is an arch
fingerprint. In these cases, singularity points cannot be detected,
so that the whole process will fail. For the aforementioned rea-
sons, a new technique, showing good accuracy rates and low
computational cost, is introduced to detect pseudosingularity
points.
a) Core and delta extraction: Singularity-point detection
is performed by checking the Poincare indexes associated with
the fingerprint direction matrix. As pointed out before, the singularity
points with a Poincare index equal to 180°, -180°, and
360° are associated with the core, the delta, and the double core,
respectively. In greater detail, the directional image extraction
phase is composed of the following four sequential tasks:
1) Gx and Gy gradient calculation using Sobel operators;
2) Dx and Dy derivative calculation;
3) angle θ(i, j) calculation for the (i, j) block;
4) Gaussian smoothing filter application on the angle matrix.
Finally, the singularity points are detected according to the
Poincare indexes.
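The Poincare-index test on the smoothed directional image can be sketched as follows. This is a hypothetical implementation: the 8-neighbor loop order and the modulo-π wrapping convention for orientation differences are assumptions, not details given in the paper:

```python
import math

# Sketch of the Poincare-index test on a block orientation field
# (angles in radians, defined modulo pi). Summing wrapped orientation
# differences around the 8-neighbor closed loop gives ~ +180 deg at a
# core, -180 deg at a delta, and ~0 elsewhere.

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
             (1, 1), (1, 0), (1, -1), (0, -1)]  # closed-loop order

def _wrap(d):
    """Wrap an orientation difference into (-pi/2, pi/2]."""
    while d <= -math.pi / 2:
        d += math.pi
    while d > math.pi / 2:
        d -= math.pi
    return d

def poincare_index_deg(orient, i, j):
    """Poincare index (degrees) around block (i, j) of the orientation map."""
    angles = [orient[i + di][j + dj] for di, dj in NEIGHBORS]
    total = 0.0
    for k in range(8):
        total += _wrap(angles[(k + 1) % 8] - angles[k])
    return math.degrees(total)
```

On a synthetic loop whose orientations rotate by π around the path, the function returns 180°, matching the core case above.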
b) Pseudosingularity-point detection and fingerprint classification:
The extraction step described in the previous section
may be affected by several drawbacks due to the fingerprint acquisition
process. In addition, arch fingerprints have no core and no
delta points, so that the previous process will give no points as
output. Generally, fingerprint images do not contain the same
number of singularity points. A whorl-class fingerprint has two
core and two delta points, a left-loop or a right-loop fingerprint
has one core and one delta, and a tented arch fingerprint has
only a core point, while an arch fingerprint has no singularity
points.
A directional map is used to identify the real class of the
processed fingerprint image. Let us call θ the angle between
a directional segment and the horizontal axis. Fingerprint topological
structure shows that the core-delta path follows only
high angular variation points in a vertical direction. For this
reason, each θ > π/2 will be set to θ - π, so that the directional
map is mapped into the range [-π/2, π/2]. The new mapping
makes it possible to highlight the points with an angular variation
close to π/2 in the directional map [see the white curve in Fig. 2(b)].
According to (1), for each kernel (i, j), the differences computed
between each directional map element θ(i, j) and its 8-neighbors
are used to detect the zones with the highest vertical differences.
Finally, according to (2), the point having the maximum
angular difference is selected:

difference_k(i, j) = angle(i, j) - k_neighbor(i, j),  k = 1, . . . , 8    (1)

max_difference_angle(i, j) = max_k(difference_k(i, j)).    (2)
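Equations (1) and (2) can be transcribed directly, with the directional map held as nested lists (the names are illustrative):

```python
# Direct transcription of (1)-(2): for each directional-map element,
# compute its difference from each of the 8 neighbors and keep the
# maximum. The nested-list map representation is an assumption.

NEIGHBORS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

def max_difference_angle(angle, i, j):
    """Eq. (2): max over k of angle(i, j) - k_neighbor(i, j) from Eq. (1)."""
    return max(angle[i][j] - angle[i + di][j + dj]
               for di, dj in NEIGHBORS_8)
```

Scanning this quantity over the map highlights the high-difference path followed toward the missing singularity point.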
Fig. 2. Classification of a partially acquired image. (a) Original fingerprint image with the overlapping line between the core and pseudodelta point. (b) Map of highest differences, where the white line direction identifies the pseudodelta point. (c) Midpoint Md, on the core and pseudodelta-point segment, with the orthogonal LR segment.
In unbroken and well-acquired images, high-value points
identify a path between the core and delta points.
In a partially acquired left-loop, right-loop, or tented arch
fingerprint, this value identifies a path between the extracted
singularity point and the missing point. Fig. 2(a) shows an example
of a partial left-loop fingerprint, in which there is only one
core point (highlighted by a green circle). Fig. 2(b) represents
the matrix of the highest angle differences. The white line starts
from the core point and goes toward the missing delta point.
The proposed algorithm follows this line and approximates a
pseudodelta point (highlighted by a red triangle), which will
be used for image classification.
Fingerprint classification is performed considering the mutual
position between the core and the pseudosingularity point. As
shown in Fig. 2(c), the directional map analysis refers to three
points, identified by the midpoint Md of the core and pseudodelta
line. The midpoint is used to set the L point and the R point, on the
orthogonal line, in such a way that Md is the midpoint of the LR
segment.
The mutual positions between the core, the pseudodelta point, Md,
L, and R identify the fingerprint class by applying the following rules.
1) Left-loop class:
abs(core_delta_angle - R_angle) < abs(core_delta_angle - L_angle)
AND abs(core_delta_angle - Md_angle) > tolerance_angle.
2) Right-loop class:
abs(core_delta_angle - R_angle) > abs(core_delta_angle - L_angle)
AND abs(core_delta_angle - Md_angle) > tolerance_angle.
3) Tented arch class:
abs(core_delta_angle - R_angle) < abs(core_delta_angle - L_angle)
AND abs(core_delta_angle - Md_angle) < tolerance_angle.
where core_delta_angle is the angle between the core and
pseudodelta-point segment and the horizontal axis, R_angle
Fig. 3. For arch fingerprints, the pseudosingularity point refers to the maximum curvature points. (a) Arch fingerprint. (b) 2-D view of the curvature map.
Fig. 4. Iris ROI extraction scheme: boundary localization, pupil segmentation, iris segmentation, and eyelid and eyelash erosion.
is the angle of the R point in the directional map, L_angle is
the angle of the L point in the directional map, and Md angle
is the angle of the Md point in the directional map.
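The three classification rules can be transcribed directly, treating the angle names as precomputed inputs. The function name, the conjunctive reading of the paired conditions, and the fallback value are assumptions:

```python
# Direct transcription of the three class rules. Inputs are the
# precomputed angles named in the text; "unclassified" is an assumed
# fallback for combinations the rules do not cover.

def classify(core_delta_angle, l_angle, r_angle, md_angle, tolerance_angle):
    d_r = abs(core_delta_angle - r_angle)
    d_l = abs(core_delta_angle - l_angle)
    d_md = abs(core_delta_angle - md_angle)
    if d_r < d_l and d_md > tolerance_angle:
        return "left-loop"
    if d_r > d_l and d_md > tolerance_angle:
        return "right-loop"
    if d_r < d_l and d_md < tolerance_angle:
        return "tented-arch"
    return "unclassified"
```

The whorl and arch classes are decided separately, as explained next, since they rely on double-core detection and curvature rather than on these angle comparisons.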
Whorl fingerprint topology is characterized by two close and
centered core points, so that double-core detection selects
the fingerprint class.
Arch fingerprints have no singularity points. In this case,
the proposed approach detects the maximum curvature point,
namely the pseudocore point, and the neighboring pixels, i.e., the
needed ROI (see Fig. 3).
2) Iris ROI Extraction: As shown in Fig. 4, the iris ROI segmentation
process is composed of four tasks: boundary localization,
pupil segmentation, iris segmentation, and eyelid and
eyelash erosion.
a) Boundary localization: The boundaries in an iris image
are extracted by means of edge-detection techniques to compute
the parameters of the iris and pupil neighbors.
The approach aims to detect the circumference center and
radius of the iris and pupil region, even if the circumferences
are usually not concentric [20]. Finally, the eyelid and eyelash
shapes are located.
b) Pupil segmentation: The pupil-identification phase is
composed of two steps. The first step is an adaptive thresholding,
and the second step is a morphological opening operation [21].
Fig. 5. Pupil segmentation: thresholding application and the morphological opening operation.
The first step is able to identify the pupil, but it cannot elim-
inate the presence of noise due to the acquisition phase. The
second step is based on a morphological opening operation per-
formed using a structural element of circular shape. As shown in
Fig. 5, the step ends when the morphological opening operation
reduces the pupil area to approximate the structural element.
Subsequently, the pupil radius and center are identified. The
identification algorithm is executed in two steps: the first step
detects connected and almost-connected circular areas, trying
to get the best (radius, center) pair with respect to the
previous phase. The second step, starting from a square around
the coordinates of the obtained centroid, measures the maximum
gradient variations along the circumferences centered in
the identified points with different radii.
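The second refinement step can be sketched as a search for the radius showing the largest intensity jump between consecutive circumferences. This is a simplified stand-in for the maximum-gradient measurement described above; the sampling density, names, and jump criterion are assumptions:

```python
import math

# Simplified sketch of the radius refinement: sample the mean image
# intensity along concentric circumferences around a centroid estimate
# and pick the radius just past the largest intensity jump (the
# pupil/iris boundary). Nearest-pixel sampling is an assumption.

def best_radius(image, cx, cy, radii, n_samples=64):
    def ring_mean(r):
        total = 0.0
        for t in range(n_samples):
            a = 2 * math.pi * t / n_samples
            x = int(round(cx + r * math.cos(a)))
            y = int(round(cy + r * math.sin(a)))
            total += image[y][x]
        return total / n_samples

    means = [ring_mean(r) for r in radii]
    # the largest jump between consecutive rings marks the boundary
    jumps = [abs(means[k + 1] - means[k]) for k in range(len(means) - 1)]
    return radii[jumps.index(max(jumps)) + 1]
```

On a synthetic dark disc of radius 5 over a bright background, the candidate radius just outside the disc wins.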
c) Iris segmentation: The iris boundary is detected in two
steps. First, image-intensity information is converted into a binary
edge map. Then, the set of edge points is subjected to
voting to instantiate the contour parameter values.
The edge-map is recovered using the Canny algorithm for
edge detection [22]. This operation is based on the thresholding
of the magnitude of the image-intensity gradient. In order to
incorporate directional tuning, image-intensity derivatives are
weighted to amplify certain ranges of orientation. For example,
in order to recognize this boundary contour, the derivatives are
weighted to be selective for vertical edges. Then, a voting pro-
cedure of Canny operator extracted points is applied to erase the
disconnected points along the edge. In greater detail, the Hough
transform [23] is defined, as in (3), for the circular boundary
and a set of recovered edge points (xj, yj), with j = 1, . . ., n:

H(xc, yc, r) = Σ_{j=1..n} h(xj, yj, xc, yc, r)    (3)

where

h(xj, yj, xc, yc, r) = 1 if g(xj, yj, xc, yc, r) = 0, and 0 otherwise    (4)

with

g(xj, yj, xc, yc, r) = (xj - xc)² + (yj - yc)² - r².    (5)
For each edge point (xj , yj ), g(xj , yj , xc , yc , r) = 0 for every
parameter (xc , yc , r) that represents a circle through that point.
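The voting of (3)-(5) can be sketched as an exhaustive accumulator over candidate triplets. This naive implementation is for illustration only; practical systems restrict the candidate centers and radii, and a small tolerance replaces the exact g = 0 test on discrete pixels:

```python
# Sketch of the circular Hough voting (3)-(5): every edge point votes
# for the (xc, yc, r) triplets whose circle passes (approximately)
# through it; the accumulator maximum selects the boundary. The `tol`
# slack on g = 0 is an assumption needed for discrete coordinates.

def hough_circle(edge_points, centers, radii, tol=1.0):
    """Return the best (xc, yc, r) triplet and its vote count."""
    best, best_votes = None, -1
    for (xc, yc) in centers:
        for r in radii:
            votes = 0
            for (xj, yj) in edge_points:
                g = (xj - xc) ** 2 + (yj - yc) ** 2 - r ** 2
                if abs(g) <= tol:   # h(...) = 1 when g is (near) zero
                    votes += 1
            if votes > best_votes:
                best, best_votes = (xc, yc, r), votes
    return best, best_votes
```

With six points sampled from a circle of radius 5 about the origin, the accumulator maximum recovers that triplet with all six votes.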
Fig. 6. Examples of the segmentation process. (a) Original BATH database iris image. (b) Image with pupil and iris circumference, eyelid, and eyelash points. (c) Extracted iris ROI after the segmentation.
The triplet maximizing H corresponds to the largest number of
edge points that represent the contour of interest.
d) Eyelid and eyelash erosion: Eyelids and eyelashes are
considered to be noise, and are therefore seen to degrade
the system performance. Eyelashes are of two types: separable
eyelashes and multiple eyelashes. The eyelashes present in our
dataset belong to the separable type. Initially, the eyelids are
isolated by fitting a line to the upper and lower eyelid using the
linear Hough transform. Subsequently, the Canny algorithm is
used to create the edge map, and only the horizontal gradient
information is considered.
As shown in Fig. 6, the real part of the Gabor filter (1-D
Gabor filter) in the spatial domain has been used [24] for eyelash
location. The convolution between a separable eyelash and
the real part of the Gabor filter is very small. For this reason,
if a resultant point is smaller than an empirically predefined
threshold, the point belongs to an eyelash. Each point in an
eyelash must be connected to another eyelash point or to an
eyelid. If at any point one of the two previous criteria is
fulfilled, its neighboring pixels are checked to determine whether
they belong to an eyelash or eyelid. If none of the neighboring
pixels has been classified as a point in an eyelid or in an eyelash,
the point is not considered a pixel in an eyelash.
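The eyelash test can be sketched as a 1-D convolution with the real part of a Gabor kernel followed by thresholding. The kernel parameters and names are assumptions (the paper does not report them), and the connectivity check described above is omitted for brevity:

```python
import math

# Sketch of the eyelash test: convolve an image row with the real part
# of a 1-D Gabor filter and flag positions whose response magnitude
# falls below an empirical threshold. Kernel size, sigma, and frequency
# are illustrative assumptions.

def gabor_real_kernel(size=9, sigma=2.0, freq=0.25):
    """Real (cosine) part of a 1-D Gabor filter, sampled at integer taps."""
    half = size // 2
    return [math.exp(-(x * x) / (2 * sigma * sigma))
            * math.cos(2 * math.pi * freq * x)
            for x in range(-half, half + 1)]

def eyelash_candidates(row, threshold):
    """Indices whose Gabor response magnitude is below the threshold."""
    k = gabor_real_kernel()
    half = len(k) // 2
    hits = []
    for i in range(half, len(row) - half):
        resp = sum(row[i + x - half] * k[x] for x in range(len(k)))
        if abs(resp) < threshold:
            hits.append(i)
    return hits
```

A flat, low-intensity row produces near-zero responses everywhere, so every valid position is flagged, consistent with the small-convolution criterion above.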
B. Matching Algorithm
Fusion is performed by combining the biometric templates extracted from every pair of fingerprints and irises representing
a user. First, the identifiers extracted from the original images
are stored in different feature vectors. Subsequently, each vector
is normalized in polar coordinates. Then, they are combined.
Finally, the HD is used for matching-score computation. In what
follows, the applied techniques for ROI normalization, template
generation, and HD will be described.
1) ROI Normalization: Since the fingerprint and iris images
of different people may have different sizes, a normalization
operation must be performed after ROI extraction. For
a given person, biometric feature size may vary because of illumination
changes during the iris-acquisition phase or pressure
Fig. 7. (a) Polar coordinate system for an iris ROI and the corresponding linearized visualization. (b) Two examples showing the linearized iris and fingerprint ROI images, respectively.
variation during the fingerprint-acquisition phase. By equalizing
the histogram, the ROIs show a uniform level of brightness.
The coordinate transformation process produces a 448 × 96
biometric pattern for each meaningful ROI: 448 is the number
of the chosen radial samples (to avoid data loss over the round
angle), while 96 pixels is the highest difference between the iris
and pupil radius in the iris images, or the ROI radius in the
fingerprint images. In order to achieve invariance with regard
to roto-translation and scaling distortion, the r polar coordinate
is normalized in the [0, 1] range. Fig. 7(a) depicts the polar
coordinate system for an iris ROI and the corresponding linearized
visualization. In Fig. 7(b), two examples of iris and
fingerprint ROI images are depicted. Each Cartesian point
of the ROI image is assigned a polar coordinate pair (r, θ),
with r ∈ [R1, R2] and θ ∈ [0, 2π], where R1 is the pupil radius
and R2 is the iris radius. For fingerprint ROIs, R1 = 0.
Since iris eyelashes and eyelids generate some corrupted areas,
a noise mask corresponding to the aforementioned corrupted
areas is associated with each linearized ROI. In addition,
the phase component will be meaningless in the regions where
the amplitude is zero, so that these regions are also added to
the noise mask. Fig. 8 depicts the biometric template and the
relative noise masks. In this example, the noise mask associated
with a fingerprint singularity region [see Fig. 8(c)] is completely
black because the region is inside the fingerprint image and no
noise is considered to be present. On the contrary, if the ROIs are
partially uncovered, the noise mask will contain white zones
(no information), as shown in Fig. 8(a).
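The polar linearization described above can be sketched as follows. The 448 × 96 dimensions follow the paper; nearest-neighbor sampling and the function name are implementation assumptions:

```python
import math

# Sketch of the rubber-sheet-style normalization: sample the ROI at
# n_theta angular and n_r radial positions, with the radius rescaled
# between R1 (pupil radius; 0 for fingerprint ROIs) and R2. Defaults
# follow the 448 x 96 pattern in the text; the nearest-neighbor
# sampling is an assumption.

def linearize_roi(image, cx, cy, r1, r2, n_theta=448, n_r=96):
    """Return an n_theta x n_r grid sampled in polar coordinates."""
    out = []
    for t in range(n_theta):
        theta = 2 * math.pi * t / n_theta
        ring = []
        for s in range(n_r):
            r = r1 + (r2 - r1) * s / (n_r - 1)   # radial sample in [R1, R2]
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            ring.append(image[y][x])
        out.append(ring)
    return out
```

A uniform test image yields a uniform linearized grid of the requested dimensions, which is the degenerate sanity case; real ROIs produce the banded patterns shown in Fig. 7.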
2) Homogeneous Template Generation: The homogeneous
biometric vector fusing fingerprint and iris data is composed of
binary sequences representing the unimodal biometric templates.
The resulting vector is composed of a header and a biometric
pattern. The biometric pattern is, in turn, composed of two subpatterns.
The first pattern is related to the extracted fingerprint
singularity points, reporting the codified and normalized
ROIs.
The second pattern is related to the extracted iris code, reporting
the codified and normalized ROIs. In greater detail, the
1-byte header contains the following information.
Authorized licensed use limited to: Hindustan College of Engineering. Downloaded on July 23,2010 at 15:25:05 UTC from IEEE Xplore. Restrictions apply.
390 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART C: APPLICATIONS AND REVIEWS, VOL. 40, NO. 4, JULY 2010
Fig. 8. (a) and (c) Noise masks used to extract useful information from the iris and fingerprint descriptors. The black area is the useful area used to perform the matching process. The white areas are related to the noisy zones, and consequently they are discarded in the matching process. The iris and fingerprint descriptors are reported in (b) and (d), respectively.
1) Core number (2 bits): indicates the number of core points in the fingerprint image (0 if no ROI has been extracted around the core).
2) Delta number (2 bits): indicates the number of delta points in the fingerprint image (0 if no ROI has been extracted around the delta).
3) Fingerprint class (3 bits): indicates to which of the five fingerprint classes the fingerprint belongs.
4) Iris ROI extraction (1 bit): 0 if the segmentation step has failed.
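The four header fields above fit exactly into the 1-byte header (2 + 2 + 3 + 1 = 8 bits). A packing sketch follows; the paper does not specify the bit order, so the layout below is a hypothetical choice:

```python
def pack_header(core_n, delta_n, fp_class, iris_ok):
    """Pack the four fields into one byte.  Hypothetical bit layout:
    [7:6] core count, [5:4] delta count, [3:1] fingerprint class,
    [0] iris ROI extraction flag."""
    assert 0 <= core_n <= 3 and 0 <= delta_n <= 3 and 0 <= fp_class <= 7
    return (core_n << 6) | (delta_n << 4) | (fp_class << 1) | (1 if iris_ok else 0)

def unpack_header(b):
    """Inverse of pack_header: recover the four fields from one byte."""
    return (b >> 6) & 0x3, (b >> 4) & 0x3, (b >> 1) & 0x7, b & 0x1
```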
Normalized fingerprint and iris ROIs are codified using the Log-Gabor approach [18]. Differently from magnitude-based methods [38], Gabor filters are a tool used to provide local frequency information [25], [37]. However, the standard Gabor filter has two limitations: its bandwidth and the limited information extraction on a broad spectrum. An alternative to the Gabor filter is the Log-Gabor filter. This filter can be designed with arbitrary bandwidth, and it represents a Gabor filter constructed as a Gaussian on a logarithmic scale. The frequency response of this filter is defined by the following equation:
G(f) = exp{ −[log(f/f0)]² / (2 [log(σ/f0)]²) }    (6)

where f0 is the central frequency and σ is the filter bandwidth.
In our approach, the implementation of the Log-Gabor filter
proposed by Masek [26] has been considered. As depicted in
Fig. 9, each row of the normalized pattern is considered as a 1-D signal, processed by a convolution operation using the 1-D Log-Gabor filter.
The phase component, obtained from the 1-D Log-Gabor filter's real and imaginary parts, is then quantized into four levels, using the Daugman method [27], [28]. Therefore, each filter generates a 2-bit code for each iris/fingerprint ROI pixel.
The phase-quantization coding is performed through the Gray
code, so that only 1 bit changes when moving from one quadrant
to the next one. This will minimize the number of differing bits
when two patterns are slightly misaligned [26].
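The 1-D Log-Gabor filtering of Eq. (6) and the four-level Gray-coded phase quantization can be sketched as follows. This is an illustrative Python/NumPy version, not Masek's implementation; the parameter values f0 and σ/f0 are arbitrary examples. Note that coding the phase quadrant as the pair (sign of the real part, sign of the imaginary part) is already a Gray code: moving to an adjacent quadrant changes exactly one bit.

```python
import numpy as np

def log_gabor_encode(row, f0=0.1, sigma_over_f0=0.5):
    """Filter one row of the normalized pattern with a 1-D Log-Gabor filter
    (frequency response as in Eq. (6)) and quantize the response phase to
    one of four quadrants, giving 2 bits per pixel."""
    n = row.size
    freqs = np.fft.fftfreq(n)
    G = np.zeros(n)
    pos = freqs > 0                               # keep positive frequencies only
    G[pos] = np.exp(-np.log(freqs[pos] / f0) ** 2
                    / (2.0 * np.log(sigma_over_f0) ** 2))
    response = np.fft.ifft(np.fft.fft(row) * G)   # complex filter response
    # (Re >= 0, Im >= 0) is Gray-coded across the four phase quadrants.
    return np.stack([(response.real >= 0).astype(np.uint8),
                     (response.imag >= 0).astype(np.uint8)], axis=1)
```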
The different coded biometric patterns are then concatenated,
thus obtaining a 3-D biometric pattern, where each element is
represented by a voxel (see Fig. 10).
Fig. 9. Codified fingerprint or iris ROIs obtained by applying the four-level quantization to the 1-D Log-Gabor filter output.
Fig. 10. 3-D biometric pattern obtained by iris coding, fingerprint core region coding, and fingerprint delta region coding. The associated template header will address the meaningful voxels in the 3-D template.
3) HD-Based Matching: The matching score is calculated
through the HD calculation between two final fused templates.
The template obtained in the encoding process will need a cor-
responding matching metric that provides a measure of the sim-
ilarity degree between the two templates. The result of the mea-
sure is then compared with an experimental threshold to decide
whether or not the two representations belong to the same user.
The metric used in this paper is also used by Daugman [27], [28]
in his recognition system.
If two patterns X and Y have to be compared, the HD is defined as the sum of discordant bits in homologous positions (XOR operation between the X and Y bits). In other words,
HD = (1/N) Σ_{j=1..N} XOR(Xj, Yj)    (7)

where N is the total number of bits.
If two patterns are completely independent, the HD between them should be close to 0.5, since independence implies that the two bit strings are completely random, so that every bit has a 0.5 probability of being set to 1 or 0. If two patterns of the same biometric descriptor are compared, their distance should instead be zero.
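Eq. (7) reduces to a few lines; a minimal sketch:

```python
import numpy as np

def hamming_distance(x, y):
    """Eq. (7): fraction of discordant bits between two binary templates."""
    x = np.asarray(x, dtype=np.uint8)
    y = np.asarray(y, dtype=np.uint8)
    return np.count_nonzero(x ^ y) / x.size
```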
TABLE I
COMPOSITION AND DETAILS OF THE USED FINGERPRINT AND IRIS DATABASES (IN ITALICS, THE REDUCED DATABASES)
The algorithm used in this paper uses a mask to identify the useful area for the matching process. Therefore, the HD is calculated based only on the significant bits of the two templates. The modified formula is (8), where Xj and Yj are the corresponding bits to compare, Xnj and Ynj are the corresponding bits in the noise masks (bit set to 1 for noisy positions), and N is the number of bits representing each template:

HD = [ 1 / (N − Σ_{k=1..N} OR(Xnk, Ynk)) ] · Σ_{j=1..N} AND( XOR(Xj, Yj), NOT(Xnj), NOT(Ynj) ).    (8)
As suggested by Daugman [27], [28], to avoid false results caused by the rotation problem, a template is shifted to the right and left with respect to the corresponding template, and for each shift operation the new HD is calculated. Each shift corresponds to a rotation of 2°. Among all the obtained values, the minimum distance is considered (corresponding to the best matching between the two templates).
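The masked HD of Eq. (8) and the shift-based rotation compensation can be sketched as follows (an illustrative Python/NumPy version; the mask convention, bit 1 = noisy, and the shift range are our assumptions):

```python
import numpy as np

def masked_hd(x, y, xn, yn):
    """Eq. (8): HD restricted to the bits that are noise-free in both
    templates (assumed mask convention: bit 1 = noisy)."""
    valid = (xn | yn) == 0
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        return 1.0                                # nothing comparable
    return np.count_nonzero((x ^ y) & valid) / n_valid

def best_shift_hd(x, y, xn, yn, max_shift=8):
    """Circularly shift one template (and its mask) left and right along
    the angular axis and keep the minimum masked HD, so that a small
    rotation between acquisitions does not inflate the distance."""
    return min(masked_hd(np.roll(x, s), y, np.roll(xn, s), yn)
               for s in range(-max_shift, max_shift + 1))
```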
V. EXPERIMENTAL RESULTS
The proposed multimodal biometric authentication system
achieves interesting results on standard and commonly used
databases. To show the effectiveness of our approach, the
FVC2002 DB2 database [30] and the BATH database [31] have
been used for fingerprints and irises, respectively. The obtained experimental results, in terms of recognition rates and execution times, are outlined here. The listed FAR and FRR indexes have
been calculated following the FVC guidelines [30]. Table I gives
a brief description of the features of the used databases.
The reduced BATH-S1 database has been generated with ten users randomly extracted from the full iris database. For each user, the first eight iris acquisitions have been selected. The BATH-S2 database has been generated considering the 50 database users. For each user, the first eight iris acquisitions have been selected. The BATH-S3 database has been generated considering the 50 database users as well. For each user, a second set of eight iris acquisitions (9–16) has been selected. The
TABLE II
RECOGNITION RATES OF THE UNIMODAL BIOMETRIC SYSTEMS FOR THE ENTIRE DATABASES
TABLE III
TEST SETS COMPOSITION AND FEATURES
FVC2002 DB2A-S1 database has been generated considering
the first 50 users, while the FVC2002 DB2A-S2 database has
been generated considering the last 50 users.
A. Recognition Analysis of the Multimodal System
The multimodal recognition system performance evaluation has been performed using the well-known FRR and FAR indexes. For an authentication system, the FAR is the rate at which unauthorized accesses are incorrectly accepted, while the FRR is the rate at which authorized accesses are incorrectly rejected.
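Given lists of genuine and impostor HD scores, the two indexes at a fixed decision threshold can be computed as follows (a sketch consistent with a distance-based matcher, where a comparison is accepted when the HD is below the threshold; the FVC protocol details for pairing acquisitions are omitted):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Distance-based decision: a comparison is accepted when its HD is
    at most `threshold`.  FAR is the fraction of impostor comparisons
    wrongly accepted; FRR is the fraction of genuine comparisons wrongly
    rejected."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(impostor <= threshold))
    frr = float(np.mean(genuine > threshold))
    return far, frr
```

Sweeping the threshold and plotting the resulting (FAR, FRR) pairs yields the ROC curves discussed later in this section.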
To evaluate and compare the performance of the proposed
approach, several tests have been conducted. The first test has
been conducted on the full FVC2002 DB2A database using a
classical unimodal minutiae-based recognition system [7], [8].
This approach has resulted in FAR = 0.38% and FRR = 14.29%. The performance of the fingerprint unimodal recognition system using the previously described frequency-based approach on the same full FVC2002 DB2A database has also been evaluated. This approach has resulted in FAR = 1.37% and FRR = 22.45%. Table II shows the results achieved with the two methods.
In Table II, the result achieved by the iris unimodal recog-
nition system using the previously described frequency-based
approach on the full BATH database [31] is also reported.
Successively, several test sets considering the appropriate
number of fingerprint and iris acquisitions have been gener-
ated to test the proposed multimodal approach. Table III shows
the used test sets composition.
An initial test has been conducted on the DBtest1 dataset using a classical fusion approach at the matching-score level, utilizing a Euclidean metric applied to the HD of each subsystem. With this approach, the following results have been
TABLE IV
RECOGNITION RATES OF THE PROPOSED TEMPLATE-LEVEL FUSION ALGORITHM COMPARED TO UNIMODAL SYSTEMS FOR REDUCED DATABASES (TEN USERS)
Fig. 11. ROC curves for the unimodal biometric systems and the corresponding multimodal system with the DBtest1 dataset.
obtained: FAR = 0.07% and FRR = 11.78%. The score has been obtained by weighting the matching scores (0.65 for iris and 0.35 for fingerprints) of each unimodal biometric system. The aforementioned weight pair is the best tradeoff to meet the following constraints: 1) literature approaches show that iris-based systems achieve higher recognition accuracies than fingerprint-based ones [7], [9], [27], [28] and 2) our experimental trials confirm that the aforementioned weights optimize the recognition accuracy performance.
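As a baseline, the matching-score-level fusion described above can be sketched as follows. The 0.65/0.35 weights come from the text, while the exact Euclidean combination rule is our assumption, since the paper does not give the formula:

```python
import math

def fused_score(iris_hd, fp_hd, w_iris=0.65, w_fp=0.35):
    """Weighted Euclidean combination of the two unimodal HDs (lower
    score = better match).  The 0.65/0.35 weights are from the text;
    the combination rule itself is an assumption."""
    return math.sqrt((w_iris * iris_hd) ** 2 + (w_fp * fp_hd) ** 2)
```

The fused score is then compared against an experimental threshold, exactly as in the unimodal case.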
Successively, the proposed fusion strategy, at the template level, has been applied and tested on the same dataset (DBtest1), obtaining the results listed in Table IV. The corresponding multimodal authentication system uses the previously described homogeneous biometric vector, and it does not require any weights for the iris and fingerprint unimodal systems to evaluate the matching score. With this approach, the following results have been obtained: FAR = 0% and FRR = 5.71%. Fig. 11 shows the receiver-operating characteristic (ROC) curves for the systems reported in Table IV. ROC curves are obtained by plotting the FAR index versus the FRR index for different values of the matching threshold.
Finally, following the items listed in Table III, the remaining four datasets have been considered to further evaluate the proposed template-level fusion strategy. Table V shows the achieved
results in terms of FAR and FRR indexes. The results achieved
by the two unimodal recognition systems on the same pertinent
databases are also reported in Table V.
TABLE V
RECOGNITION RATES OF THE PROPOSED TEMPLATE-LEVEL FUSION ALGORITHM COMPARED TO UNIMODAL SYSTEMS FOR REDUCED DATABASES (50 USERS)
Fig. 12. ROC curves for the unimodal biometric systems and the corresponding multimodal system with the DBtest4 dataset.
As shown in Table V, the conducted tests produce comparable results on the used datasets, underlining the robustness of the presented approach. Fig. 12 shows the ROC curves for the systems dealing with the DBtest4 dataset. Analogous curves have been obtained with the remaining datasets.
B. Execution Time Analysis of the Multimodal Software System
The multimodal systems have been implemented using the MATLAB environment on a general-purpose Intel P4 3.00-GHz processor with 2 GB of RAM. Table VI shows the average software execution times for the preprocessing and matching tasks. The fingerprint preprocessing time can change, since it depends on singularity-point detection, pseudosingularity-point detection, or maximum-curvature-point detection.
VI. DISCUSSIONS AND COMPARISONS
Multimodal biometric identification systems aim to fuse two
or more physical or behavioral pieces of information to provide
optimal FAR and FRR indexes, thus improving system dependability.
TABLE VI
SOFTWARE EXECUTION TIMES FOR THE PREPROCESSING AND MATCHING TASKS
In contrast to the majority of work published on this topic
that has been based on matching-score-level fusion or decision-
level fusion, this paper presents a template-level fusion method
for a multimodal biometric system based on fingerprints and
irises. In greater detail, the proposed approach performs finger-
print matching using the segmented regions (ROIs) surrounding
fingerprint singularity points. On the other hand, iris prepro-
cessing aims to detect the circular region surrounding the iris.
To achieve these results, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, thus obtaining a unified template. Successively, the HD on the fused template was used for the similarity index computation. The improvements introduced by adopting the fusion process at the template level are now described and discussed, together with the related comparison against the unimodal biometric systems and the classical matching-score-fusion-based multimodal systems. The proposed approach for fingerprint and
iris segmentation, coding, and matching has been tested as uni-
modal identification systems using the official FVC2002 DB2A
fingerprint database and the BATH iris database. Even if the frequency-based approach, which uses fingerprint (pseudo) singularity-point information, introduces an error in the system recognition accuracy (see Table II), the achieved recognition results have shown interesting performance when compared with the literature approaches on similar datasets. On the other hand, in the frequency-based approach, it is very difficult to use the classical minutiae information, due to its great number: the frequency-based approach would have to consider a high number of ROIs, resulting in the coding of the whole fingerprint image and, consequently, in a high-dimensional feature vector.
Shi et al. [32] proposed a novel fingerprint-matching method
based on the Hough transform. They tested the method us-
ing the FVC2002 DB2A database, depicting two ROC curves
with FAR and FRR indexes comparable to our results. Nagar et al. [33] used minutiae descriptors to capture orientation and ridge frequency information in a minutia's neighborhood. They validated their results on the FVC2002 DB2A database, showing a working point with FAR = 0.7% at a genuine accept rate (GAR) of 95%. However, they did not use the complete database, but
only two samples for each user; therefore, they considered only
200 images. In [34], Yang et al. proposed novel helper data based on the topo-structure to reduce the alignment calculation amount. They tested their approach with FVC2002 DB2A, obtaining an FAR between 0% and 0.02% with a GAR between 88% and 92%, depending on particular thresholds.
Concerning the iris identification system, the achieved performance can be considered very interesting when compared with the results of different approaches found in the literature on the same or similar datasets. A novel technique for iris recognition using texture and phase features is proposed in [35]. Texture features are extracted from the normalized iris strip using the Haar wavelet, while phase features are obtained using the Log-Gabor wavelet. The matching scores generated by the individual modules are combined using the sum-of-scores technique. The system is tested on the BATH database, giving an accuracy of 95.62%. The combined system at matching-score-level fusion increased the system performance to FAR = 0.36% and FRR = 8.38%.
In order to test the effectiveness of the proposed multimodal approach, several datasets have been used. First, two different multimodal systems have been tested and compared on the standard FVC2002 DB2B fingerprint image database and the BATH-S1 iris image database: the former was based on a matching-score-level fusion technique, while the latter was based on the proposed template-level fusion technique. The obtained results show that the proposed template-level fusion technique yields an enhanced system with interesting results in terms of FAR and FRR (see Tables II and IV for further details). The aforementioned result suggests that template-level fusion gives better performance than matching-score-level fusion. This statement confirms the results presented in [36]. In that paper, Khalifa and Amara presented the results of four different fusions of modalities at different levels for two unimodal biometric verification systems, based on offline signature and handwriting. Their best result was obtained using a fusion strategy at the feature-extraction level. In conclusion, we can affirm that when a fusion strategy is performed at the feature-extraction level, a homogeneous template is generated, so that a unified matching algorithm can be used; the corresponding multimodal identification system then shows better results when compared to the results achieved using other fusion strategies.
Lastly, several 50-user databases have been generated, combining the available FVC2002 DB2A fingerprint database and the BATH iris database. The achieved results, reported in Table V, show uniform performance on the used datasets.
In the literature, few multimodal biometric systems based on template-level fusion have been published, rendering it very difficult to comment on and analyze the experimental results obtained in this paper. Besbes et al. [13] proposed a multimodal biometric system using fingerprint and iris features. They use a hybrid approach based on: 1) fingerprint minutiae extraction and 2) iris template encoding through a mathematical representation of the extracted iris region. However, no experimental results have been reported in the paper. As pointed out before, a mixed multimodal system based on features fusion and matching-score fusion has been proposed in [2]. The paper presents the overall result of the entire system on self-constructed, proprietary databases. The paper reports the ROC graph with the unimodal and the multimodal system results. The ROC curves show the improvements introduced by the adopted fusion strategy. No FAR and FRR values are reported. Table VII summarizes the previous results.
TABLE VII
COMPARISON OF THE RECOGNITION RATES OF OUR APPROACH AND THE OTHER LITERATURE APPROACHES
VII. CONCLUSION AND FUTURE WORKS
For an ideal authentication system, the FAR and FRR indexes are equal to 0. This result may be reached by online biometric authentication systems, because they have the freedom to reject low-quality acquired items. On the contrary, official ready-to-use databases (FVC databases, CASIA, BATH, etc.) contain images of varying quality, including low-, medium-, and high-quality biometric acquisitions, as well as partial and corrupted images. For this reason, such biometric authentication systems do not achieve the ideal result. To increase the related security level, system parameters are then fixed in order to achieve the FAR = 0% point and a corresponding FRR point.
In this paper, a template-level fusion algorithm working on a unified biometric descriptor was presented. This result leads to a matching algorithm that is able to process fingerprint-codified templates, iris-codified templates, and iris-and-fingerprint-fused templates. In contrast to the classical minutiae-based approaches, the proposed system performs fingerprint matching using the segmented regions (ROIs) surrounding (pseudo) singularity points. This choice overcomes the drawbacks related to fingerprint minutiae information: the frequency-based approach would have to consider a high number of ROIs, resulting in the coding of the whole fingerprint image and, consequently, in a high-dimensional feature vector.
At the same time, iris preprocessing aims to detect the circular region surrounding the feature, generating an iris ROI as well. For best results, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, thus obtaining a unified template. Successively, the HD on the fused template has been used for the similarity index computation.
The multimodal biometric system has been tested on different congruent datasets obtained from the official FVC2002 DB2 fingerprint database [30] and the BATH iris database [31]. The first test, conducted on ten users, has resulted in an FAR = 0% and an FRR = 5.71%, while the tests conducted on the FVC2002 DB2A and BATH databases resulted in an FAR = 0% and an FRR = 7.28%–9.7%.
Future works will aim to design and prototype an embedded recognizer integrating feature acquisition and processing in a smart device, without biometric data transmission between the different components of a biometric authentication system [6], [16].
REFERENCES
[1] A. Ross and A. Jain, "Information fusion in biometrics," Pattern Recogn. Lett., vol. 24, pp. 2115–2125, 2003. DOI: 10.1016/S0167-8655(03)00079-5.
[2] F. Yang and B. Ma, "A new mixed-mode biometrics information fusion based-on fingerprint, hand-geometry and palm-print," in Proc. 4th Int. IEEE Conf. Image Graph., 2007, pp. 689–693. DOI: 10.1109/ICIG.2007.39.
[3] J. Cui, J. P. Li, and X. J. Lu, "Study on multi-biometric feature fusion and recognition model," in Proc. Int. IEEE Conf. Apperceiving Comput. Intell. Anal. (ICACIA), 2008, pp. 66–69. DOI: 10.1109/ICACIA.2008.4769972.
[4] S. K. Dahel and Q. Xiao, "Accuracy performance analysis of multimodal biometrics," in Proc. IEEE Syst., Man Cybern. Soc., Inf. Assur. Workshop, 2003, pp. 170–173. DOI: 10.1109/SMCSIA.2003.1232417.
[5] A. Ross, K. Nandakumar, and A. K. Jain, Handbook of Multibiometrics. Berlin, Germany: Springer-Verlag. ISBN 978-0-387-22296-7.
[6] UK Biometrics Working Group (BWG). Biometrics Security Concerns. (2009, Nov.). [Online]. Available: http://www.cesg.gov.uk/policy_technologies/biometrics/index.shtml, 2003.
[7] S. Prabhakar, A. K. Jain, and J. Wang, "Minutiae verification and classification," presented at the Dept. Comput. Eng. Sci., Michigan State Univ., East Lansing, MI, 1998.
[8] V. Conti, C. Militello, S. Vitabile, and F. Sorbello, "A multimodal technique for an embedded fingerprint recognizer in mobile payment systems," Int. J. Mobile Inf. Syst., vol. 5, no. 2, pp. 105–124, 2009.
[9] N. K. Ratha, R. M. Bolle, V. D. Pandit, and V. Vaish, "Robust fingerprint authentication using local structural similarity," in Proc. 5th IEEE Workshop Appl. Comput. Vis., Dec. 4–6, 2000, pp. 29–34. DOI: 10.1109/WACV.2000.895399.
[10] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," in Proc. 15th Int. Conf. Pattern Recogn., 2000, vol. 2, pp. 805–808.
[11] L. Ma, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.
[12] V. Conti, G. Milici, P. Ribino, S. Vitabile, and F. Sorbello, "Fuzzy fusion in multimodal biometric systems," in Proc. 11th LNAI Int. Conf. Knowl.-Based Intell. Inf. Eng. Syst. (KES 2007/WIRN 2007), Part I, LNAI 4692, B. Apolloni et al., Eds. Berlin, Germany: Springer-Verlag, 2010, pp. 108–115.
[13] F. Besbes, H. Trichili, and B. Solaiman, "Multimodal biometric system based on fingerprint identification and iris recognition," in Proc. 3rd Int. IEEE Conf. Inf. Commun. Technol.: From Theory to Applications (ICTTA 2008), pp. 1–5. DOI: 10.1109/ICTTA.2008.4530129.
[14] G. Aguilar, G. Sanchez, K. Toscano, M. Nakano, and H. Perez, "Multimodal biometric system using fingerprint," in Proc. Int. Conf. Intell. Adv. Syst., 2007, pp. 145–150. DOI: 10.1109/ICIAS.2007.4658364.
[15] V. C. Subbarayudu and M. V. N. K. Prasad, "Multimodal biometric system," in Proc. 1st Int. IEEE Conf. Emerging Trends Eng. Technol., 2008, pp. 635–640. DOI: 10.1109/ICETET.2008.93.
[16] P. Ambalakat. Security of biometric authentication systems. 21st Comput. Sci. Semin. (SA1-T1-1). (2009, Nov.). [Online]. Available: http://www.rh.edu/rhb/cs_seminar_2005/SessionA1/ambalakat.pdf, 2005.
[17] A. Ross and R. Govindarajan, "Feature level fusion using hand and face biometrics," in Proc. SPIE Conf. Biometric Technol. Human Identification II, Mar. 2005, vol. 5779, pp. 196–204.
[18] D. J. Field, "Relations between the statistics of natural images and the response profiles of cortical cells," J. Opt. Soc. Amer., vol. 4, pp. 2379–2394, 1987.
[19] M. Kawagoe and A. Tojo, "Fingerprint pattern classification," Pattern Recogn., vol. 17, no. 3, pp. 295–303, 1984. DOI: 10.1016/0031-3203(84)90079-7.
[20] M. L. Pospisil, "The human iris structure and its usages," Acta Univ. Palacki Phisica, vol. 39, pp. 87–95, 2000.
[21] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 2008.
[22] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.
[23] P. V. C. Hough, "Method and means for recognizing complex patterns," U.S. Patent 3 069 654, Dec. 18, 1962.
[24] A. Kumar and G. K. H. Pang, "Defect detection in textured materials using Gabor filters," IEEE Trans. Ind. Appl., vol. 38, no. 2, pp. 425–440, Mar./Apr. 2002.
[25] L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 8, pp. 777–789, Aug. 1998.
[26] L. Masek, "Recognition of human iris patterns for biometric identification," Master's thesis, Univ. Western Australia, 2003. (2009, Nov.). [Online]. Available: http://www.csse.uwa.edu.au/-pk/studentprojects/libor/
[27] J. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recogn., vol. 36, pp. 279–291, 2003.
[28] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, Jan. 2004. DOI: 10.1109/TCSVT.2003.818350.
[29] Celoxica Ltd. (2009, Nov.). [Online]. Available: http://agilityds.com/products/
[30] Fingerprint Verification Competition FVC2002. (2009, Nov.). [Online]. Available: http://bias.csr.unibo.it/fvc2002/
[31] BATH Iris Database, University of Bath Iris Image Database. (2009, Nov.). [Online]. Available: http://www.bath.ac.uk/eleceng/research/sipg/irisweb/
[32] J. Q. Z. Shi, X. Zhao, and Y. Wang, "A novel fingerprint matching method based on the Hough transform without quantization of the Hough space," in Proc. 3rd Int. Conf. Image Graph. (ICIG 2004), pp. 262–265. ISBN 0-7695-2244-0.
[33] A. Nagar, K. Nandakumar, and A. K. Jain, "Securing fingerprint template: Fuzzy vault with minutiae descriptors," in Proc. 19th Int. Conf. Pattern Recogn. (ICPR 2008), pp. 1–4. ISBN 978-1-4244-2174-9.
[34] J. Li, X. Yang, J. Tian, P. Shi, and P. Li, "Topological structure-based alignment for fingerprint fuzzy vault," presented at the 19th Int. Conf. Pattern Recogn. (ICPR), Beijing, China. ISBN 978-1-4244-2174-9.
[35] H. Mehrotra, B. Majhi, and P. Gupta, "Multi-algorithmic iris authentication system," presented at the World Acad. Sci., Eng. Technol., Buenos Aires, Argentina, vol. 34, 2008. ISSN 2070-3740.
[36] A. B. Khalifa and N. E. B. Amara, "Bimodal biometric verification with different fusion levels," in Proc. 6th Int. Multi-Conf. Syst., Signals Devices (SSD 09), 2009, pp. 1–6. DOI: 10.1109/SSD.2009.4956731.
[37] Y. Guo, G. Zhao, J. Chen, M. Pietikainen, and Z. Xu, "A new Gabor phase difference pattern for face and ear recognition," presented at the 13th Int. Conf. Comput. Anal. Images Patterns, Münster, Germany, Sep. 2–4, 2009.
[38] A. Ross, A. K. Jain, and J. Reisman, "A hybrid fingerprint matcher," Pattern Recogn., vol. 36, no. 7, pp. 1661–1673, Jul. 2003.
Vincenzo Conti received the Laurea (summa cum laude) and Ph.D. degrees in computer engineering from the University of Palermo, Palermo, Italy, in 2000 and 2005, respectively.
Currently, he is a Postdoctoral Fellow with the Department of Computer Engineering, University of Palermo. His research interests include biometric recognition systems, programmable architectures, user ownership in multi-agent systems, and bioinformatics. In each of these research fields, he has produced many publications in national and international journals and conferences. He has participated in several research projects funded by industries and research institutes in his research areas.
Carmelo Militello received the Laurea (summa cum laude) degree in computer engineering in 2006 from the University of Palermo, Palermo, Italy, with the following thesis: "An Embedded Device Based on Fingerprints and SmartCard for Users Authentication. Study and Realization on Programmable Logical Devices." From January 2007 to December 2009, he attended the Ph.D. course in the Department of Computer Engineering (DINFO), University of Palermo.
He is currently a member of the Innovative Computer Architectures (IN.C.A.) Group of the DINFO, coordinated by Prof. Filippo Sorbello. His research interests include embedded biometric systems prototyped on reconfigurable architectures.
Filippo Sorbello (M'91) received the Laurea degree in electronic engineering from the University of Palermo, Palermo, Italy, in 1970.
He is a Professor of computer engineering with the Department of Computer Engineering (DINFO), University of Palermo. He is a founding member and served as the Department Head for the first two terms. From 1995 to 2009, he served as the Director of the Office for Information Technology (CUC) of the University of Palermo. His research interests include neural network applications, real-time image processing, biometric authentication systems, multi-agent system security, and digital computer architectures. He has chaired and participated as a member of the program committee of several national and international conferences. He has coauthored more than 150 scientific publications.
Prof. Sorbello is a member of the IEEE Computer Society, the Association for Computing Machinery (ACM), the Italian Association for Artificial Intelligence (AIIA), the Italian Association for Computing (AICA), and the Italian Association of Electrical, Electronic, Control, and Computer Engineers (AEIT).
Salvatore Vitabile (M'07) received the Laurea degree in electronic engineering and the Dr. degree in computer science from the University of Palermo, Palermo, Italy, in 1994 and 1999, respectively.
He is currently an Assistant Professor with the Department of Biopathology, Medical, and Forensic Biotechnologies (DIBIMEF), University of Palermo. His research interests include neural network applications, biometric authentication systems, exotic architecture design and prototyping, real-time driver assistance systems, multi-agent system security, and medical image processing. He has coauthored more than 100 scientific papers in refereed journals and conferences.
Dr. Vitabile has joined the Editorial Board of the International Journal of Information Technology and Communications and Convergence. He has chaired, organized, and served as a member of the technical program committees of several international conferences, symposia, and workshops. He is currently a member of the Board of Directors of SIREN (Italian Society of Neural Networks) and the IEEE Engineering in Medicine and Biology Society.