IEEE ICAPR 2009
Multisensor Biometric Evidence Fusion for Person Authentication using Wavelet Decomposition and Monotonic-Decreasing Graph
Date: 4th - 6th February, 2009
"Multisensor Biometric Evidence Fusion for Person Authentication using Wavelet Decomposition and Monotonic-Decreasing Graph"
D. R. Kisku, J. K. Sing, M. Tistarelli, P. Gupta
Department of Computer Science and Engineering,
Dr. B. C. Roy Engineering College,
Durgapur – 713206, India
[email protected]
Agenda of discussion:
- Introduction
- Multisensor biometric evidence fusion using wavelet decomposition
- SIFT feature extraction
- Monotonic-decreasing graph
- Experimental results
- Concluding remarks
Introduction:
This work presents a novel fusion of biometric sensor-generated evidence from face and palmprint images, using wavelet decomposition, for personal identity verification.
Biometric image fusion at the sensor level refers to a process that fuses multi-pattern images, captured at different resolutions and by different biometric sensors, to acquire richer and complementary information and produce a new fused image in spatially enhanced form.
Once the fused image is ready for further processing, the SIFT operator is used for feature extraction, and recognition is performed by adjustable structural graph matching between a pair of fused images, searching for corresponding points with a recursive descent tree traversal approach.
Multisensor biometric evidence fusion using wavelet decomposition:
Multisensor image fusion [1] refers to a process that fuses images to generate a complete fused image at low level.
The fused image contains redundant and complementary, richer information.
Evidence fusion is based on decomposing the images [1] into multiple channels depending on their local frequency content.
Each image is decomposed into a number of new images, each having a different degree of resolution.
The wavelet representation is an intermediate representation between the Fourier and spatial representations.
It has the capability to provide good localization in both the frequency and space domains.
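As an illustration of this fusion scheme, the sketch below performs a single-level 2-D Haar decomposition of two equally sized grayscale images, averages the approximation (low-frequency) bands, selects the detail coefficients with the larger magnitude, and reconstructs the fused image. This is a minimal sketch: the Haar basis, the single decomposition level, and the average/max fusion rules are illustrative assumptions, not the exact configuration of the paper.

```python
import numpy as np

def haar2(x):
    """Single-level 2-D Haar decomposition: approximation + 3 detail bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation (low-low)
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact reconstruction)."""
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

def fuse(face, palm):
    """Average the approximation bands, keep the larger-magnitude details."""
    (ll1, *det1), (ll2, *det2) = haar2(face), haar2(palm)
    ll = (ll1 + ll2) / 2.0
    details = [np.where(np.abs(u) >= np.abs(v), u, v)
               for u, v in zip(det1, det2)]
    return ihaar2(ll, *details)
```

A quick sanity check: fusing an image with itself reproduces the image exactly, confirming the decomposition/reconstruction pair is lossless.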
Contd…
Wavelet-based image fusion [1] of face and palmprint images is shown in the following figure.
[Figure: Face image → Decomposition, Palm image → Decomposition; the two decompositions are fused and reconstructed into the fused image.]
SIFT feature extraction:
The scale invariant feature transform (SIFT) descriptor was proposed by David Lowe [2] and has proved to be invariant to image rotation, scaling, translation, and partly to illumination changes.
The use of SIFT features for biometrics has been explored in [3]-[4].
SIFT feature points are detected with the following steps:
- select candidate feature points by searching for peaks in scale space using a difference of Gaussian (DoG) function,
- localize the feature points using a measurement of their stability,
Contd…
- assign orientations based on local image properties,
- calculate the feature descriptors, which represent local shape distortions and illumination changes.
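The first of these steps can be sketched as follows: build a Gaussian scale space, take differences of adjacent levels, and keep the points that are peaks of the DoG response in their 3x3x3 scale-space neighbourhood. This is a simplified illustration (no sub-pixel localization, edge-response rejection, or orientation assignment), and the scale steps and threshold are assumed values, not Lowe's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoint_candidates(img, sigma0=1.6, n_scales=5, thresh=0.03):
    """Return (scale_index, row, col) tuples of DoG scale-space peaks."""
    sigmas = [sigma0 * (2 ** (i / 2.0)) for i in range(n_scales)]
    gauss = np.stack([gaussian_filter(img, s) for s in sigmas])
    dog = gauss[:-1] - gauss[1:]          # difference of Gaussians
    # a candidate is the maximum of its 3x3x3 scale-space neighbourhood
    peaks = (dog == maximum_filter(dog, size=3)) & (dog > thresh)
    return list(zip(*np.nonzero(peaks)))
```

On a synthetic image containing a single bright blob, the detector reports a candidate at the blob centre.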
The left image shows the fused image; on the right, the SIFT features extracted from the fused image are shown.
Structural graph for matching and verification:
A monotonic-decreasing graph based [5] relation is established between a pair of fused images.
A recursive tree traversal algorithm is used to search for the paired matching feature points.
We choose a set of three uniquely determined points on a given fused gallery image.
These three points are connected to form a triangle, and the three pairwise distances are computed.
We then try to locate another set of three points on a given fused probe image that also forms a triangle, one that best matches the triangle computed on the gallery image.
Contd…
The best match is obtained when the edges of the second triangle match the edges of the first triangle under the following criterion.
Traversal is possible when the first vertex and the subsequent vertices of the second triangle correspond to the first vertex and the subsequent vertices of the first triangle, and conversely.
|d(p_i, p_j) − d(g_1, g_2)| ≤ ε_1
|d(p_j, p_k) − d(g_2, g_3)| ≤ ε_2
|d(p_i, p_k) − d(g_1, g_3)| ≤ ε_3
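A brute-force version of this triangle search can be sketched as below: given the three gallery keypoints, scan over ordered triples of probe keypoints and keep the triple whose edge lengths satisfy the tolerance criterion with the smallest total deviation. The tolerance value and the exhaustive search over permutations are illustrative assumptions; the paper prunes this search with a recursive descent tree traversal.

```python
import math
from itertools import permutations

def match_triangle(gallery_tri, probe_pts, eps=2.0):
    """Find the probe triple whose triangle best matches the gallery triangle."""
    g1, g2, g3 = gallery_tri
    g_edges = (math.dist(g1, g2), math.dist(g2, g3), math.dist(g3, g1))
    best, best_err = None, float("inf")
    # permutations cover both vertex orderings ("and conversely")
    for pi, pj, pk in permutations(probe_pts, 3):
        errs = (abs(math.dist(pi, pj) - g_edges[0]),
                abs(math.dist(pj, pk) - g_edges[1]),
                abs(math.dist(pk, pi) - g_edges[2]))
        if all(e < eps for e in errs) and sum(errs) < best_err:
            best, best_err = (pi, pj, pk), sum(errs)
    return best
```

With a translated copy of the gallery triangle among the probe points, the search recovers exactly that triple.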
Contd…
Traversal can start from the first edge (p_i, p_j); by visiting n feature points, we can generate a matching graph on the fused probe image, which should be a corresponding candidate graph of G.
At the end of the traversal algorithm, a set of candidate graphs is found, each with an identical number of feature points.
The final optimal graph is the candidate graph with the minimal k-th order error from this set, and we can write
|P″_k| ≤ |P′_ik|,  ∀i
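The recursive descent traversal that grows a candidate graph can be sketched as follows: a probe point is added only if its distances to all already-matched probe points agree, within a tolerance, with the corresponding gallery distances, and the recursion backtracks when no point fits. The function name and tolerance are illustrative; this is a sketch of the idea, not the paper's exact algorithm.

```python
import math

def grow_candidate(gallery_pts, probe_pts, matched, eps=2.0):
    """Recursively extend `matched`, pairing probe points to gallery_pts[:len(matched)]."""
    k = len(matched)
    if k == len(gallery_pts):          # every gallery point has a partner
        return matched
    for p in probe_pts:
        if p in matched:
            continue
        # p must preserve all distances to the points matched so far
        ok = all(abs(math.dist(p, matched[i]) -
                     math.dist(gallery_pts[k], gallery_pts[i])) < eps
                 for i in range(k))
        if ok:
            result = grow_candidate(gallery_pts, probe_pts, matched + [p], eps)
            if result is not None:
                return result          # success down this branch
    return None                        # backtrack
```

Starting from an empty match, the traversal recovers a translated copy of a gallery square while skipping a distractor point.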
Contd…
The k-th order error between the optimal graph and the gallery graph can be computed as

|P″_ik| = Σ_{j=1}^{min(k+1, m)} |d(p′_i, p′_j) − d(g_i, g_j)|,  k = 1, 2, 3, …, m

The above equation denotes the sum of all differences between pairs of corresponding edges of the two graphs.
For identity verification of a person, a client-specific threshold is determined heuristically for each user; the final dissimilarity value is then compared with the client-specific threshold and a decision is made.
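Per the description above, the dissimilarity between a matched graph and the gallery graph reduces to summing edge-length differences over corresponding point pairs. A minimal sketch, with the client-specific threshold as an assumed input:

```python
import math

def graph_dissimilarity(probe_pts, gallery_pts):
    """Sum of |d(p'_i, p'_j) - d(g_i, g_j)| over all corresponding edges."""
    m = len(probe_pts)
    return sum(abs(math.dist(probe_pts[i], probe_pts[j]) -
                   math.dist(gallery_pts[i], gallery_pts[j]))
               for i in range(m) for j in range(i + 1, m))

def verify(probe_pts, gallery_pts, client_threshold):
    """Accept the claimed identity if the dissimilarity is within the user's threshold."""
    return graph_dissimilarity(probe_pts, gallery_pts) <= client_threshold
```

A translated copy of the gallery graph has zero dissimilarity and is accepted; a distorted one is rejected at the same threshold.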
Experimental results:
The experiment with the proposed method is carried out on a multimodal database containing the face and palmprint images of 150 individuals.
Matching is performed for the proposed method, and the results show that fusion performance at the semi-sensor / low level is superior to the other two methods, namely palmprint verification and face recognition, drawn on the same feature space.
Multisensor biometric image fusion produces 98.19% accuracy, while face recognition and palmprint recognition produce 89.04% and 92.17% accuracy, respectively, as shown in the figure.
Contd…
[Figure: Receiver Operating Characteristics (ROC) curve; False Accept Rate (10^-4 to 10^0, log scale) vs. Accept Rate (0.1 to 1), for Multisensor Biometric Fusion, Palmprint-SIFT Matching and Face-SIFT Matching.]
ROC curves (in ‘stairs’ form) for the different methods are shown.
Concluding remarks:
A novel and efficient method of multisensor biometric image fusion of face and palmprint for personal authentication is proposed.
High-resolution face and palmprint images are fused using a wavelet decomposition process, and matching is performed by a monotonic-decreasing graph drawn on invariant SIFT features.
The results show that the proposed method, initiated at the low / semi-sensor level, is robust, computationally efficient and less sensitive to unwanted noise, confirming the validity and efficacy of the system.
References:
1. T. Stathaki, "Image Fusion – Algorithms and Applications", Academic Press, U.K., 2008.
2. D. G. Lowe, "Distinctive image features from scale invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, 2004.
3. U. Park, S. Pankanti and A. K. Jain, "Fingerprint Verification Using SIFT Features", Proceedings of SPIE Defense and Security Symposium, Orlando, Florida, 2008.
4. M. Bicego, A. Lagorio, E. Grosso and M. Tistarelli, "On the use of SIFT features for face authentication", Proc. of Int. Workshop on Biometrics, in association with CVPR, 2006.
5. Z. C. Lin, H. Lee and T. S. Huang, "Finding 3-D point correspondences in motion estimation", Proceedings of International Conference on Pattern Recognition, Paris, France, pp. 303-305, October 1986.
Questions???