
Detecting Finger-Vein Presentation Attacks Using 3D Shape & Diffuse Reflectance Decomposition

Jag Mohan Singh, Sushma Venkatesh, Kiran B. Raja, Raghavendra Ramachandra, Christoph Busch

Norwegian Biometrics Laboratory, NTNU, Norway
{jag.m.singh; sushma.venkatesh; kiran.raja}@ntnu.no
{raghavendra.ramachandra; christoph.busch}@ntnu.no

Abstract

Despite their high biometric performance, finger-vein recognition systems are vulnerable to presentation attacks (also known as spoofing attacks). In this paper, we present a new and robust approach for detecting presentation attacks on finger-vein biometric systems that exploits the 3D shape (normal-map) and material properties (diffuse-map) of the finger. Observing that the normal-map and diffuse-map exhibit enhanced textural differences compared with the original finger-vein image, especially under varying illumination intensity, we propose to employ textural feature descriptors on both of them independently. The features are subsequently used to compute a separating hyperplane using Support Vector Machine (SVM) classifiers, trained independently on the features extracted from the normal-maps and the diffuse-maps. Given the scores from each classifier for the normal-map and the diffuse-map, we propose sum-rule based score-level fusion to make the detection of such presentation attacks more robust. To this end, we construct a new database of finger-vein images acquired using a custom capture device with three inbuilt illuminations and validate the applicability of the proposed approach. The newly collected database consists of 936 images, corresponding to 468 bona fide images and 468 artefact images. We establish the superiority of the proposed approach by benchmarking it against classical textural feature descriptors applied directly on the finger-vein images. The proposed approach outperforms these classical approaches, achieving an Attack Presentation Classification Error Rate (APCER) and a Bona fide Presentation Classification Error Rate (BPCER) of 0%.

1. Introduction

The use of finger-vein as a biometric characteristic for verification has grown in recent years, especially for banking transaction applications, where there is a need for highly secure access control [9]. A practical example of such systems can be found in commercial applications developed by Hitachi in 2004, with finger-vein verification intended mainly for unsupervised scenarios such as bank ATMs [9]. Despite the high recognition accuracy of such finger-vein systems, they are prone to attacks at the capture level, where one can use an artefact (e.g., a printed finger-vein image) to gain illegitimate access, especially in unsupervised access control settings. Such attacks at the capture level are popularly termed presentation attacks (also known as spoofing attacks) and can be carried out using different kinds of artefacts, such as a printed photo of a finger-vein representation (print attack) or, alternatively, an electronic display showing the vein representation (display attack) [24].

Figure 1: Bona fide finger-vein image captured at three different illumination intensities.

The vulnerability of finger-vein verification systems to presentation attacks was first shown by Nguyen et al. in 2013 [11]. Given this vulnerability, it is essential to increase the security of finger-vein verification systems by supplementing them with robust attack detection mechanisms [1].

In the rest of the paper, we present the related work and our contributions in Section 2, the proposed approach in Section 3, the finger-vein capture device and database in Section 4, and the experimental protocol and results in Section 5. Finally, conclusions and future work are presented in Section 6.


2. Related Work & Our Contributions

Nguyen et al. [11] proposed the first PAD system for finger-vein, based on Fourier and wavelet transforms. In 2015, the first finger-vein PAD competition was organized [23]; the participating teams proposed the following main feature extraction methods: Binarised Statistical Image Features (BSIF [7]) and a combination of local texture descriptors including Local Binary Patterns (LBP [13]), Local Phase Quantization (LPQ [14]), a patch-wise Short Time Fourier Transform (STFT), and a Weber Local Descriptor (WLD), each followed by an SVM classifier, which resulted in highly accurate performance. Further, in 2015, Raghavendra et al. [18] used a Eulerian Video Magnification method to quantify the blood flow for finger-vein PAD against print attacks. They benchmarked their approach against the state of the art (SOTA) presented in [23], achieving a significant performance improvement. A new Presentation Attack Instrument (PAI) based on an electronic display was later proposed by Raghavendra et al. [19], where steerable pyramids were used for finger-vein PAD. Tirunagari et al. [22] presented a novel approach in which Windowed Dynamic Mode Decomposition (W-DMD) is applied to static finger-vein images instead of image sequences, as done previously with Dynamic Mode Decomposition. Kocher et al. [8] introduced LBP extensions for finger-vein PAD and concluded that they performed the same as the LBP baseline. Among deep-learning-based approaches, Qiu et al. [17] designed a new convolutional neural network (CNN) called FPNet for finger-vein PAD, which achieved perfect accuracy on the public VERA database [25]. Nguyen et al. [12] developed a PAD method for finger-vein recognition systems based on capture from a near-infrared (NIR) camera; they extracted image features using a CNN, followed by principal component analysis (PCA) for dimensionality reduction and a Support Vector Machine (SVM) as a classifier. Raghavendra et al. [4] presented augmented pre-trained CNN models that demonstrated outstanding performance on both print and electronic display attacks. Qiu et al. [16] used total variation decomposition to separate the finger-vein sample into structural and noise components, achieving a D-EER of 0%. Maser et al. [10] used Photo Response Non-Uniformity (PRNU) to detect finger-vein presentation attacks and demonstrated their results on two publicly available datasets, IDIAP [24] and SCUT-FVD [16]. Anjos et al. [2] have summarized the efforts in finger-vein presentation attacks and their detection.

2.1. Our Contributions

In summary, we note that textural information combined with a classifier can be used effectively for PAD in finger-vein systems. We employ a custom-built finger-vein sensor that captures the finger-vein characteristics under three different illumination intensities. On the decomposed normal-map and diffuse-map of the finger-vein image, we extract textural descriptors in line with previously published works. Thus, with the given approach of decomposed normal-maps and diffuse-maps, it is possible to employ any of the existing finger-vein PAD approaches. The proposed approach is presented in detail in Section 3. Our approach also minimizes the pre-processing and manual effort required by finger-vein extraction algorithms: in particular, it omits the finger-vein extraction step, which otherwise involves considerable manual effort because the extraction process requires parameter tuning. The key contributions of this work, therefore, can be summarized as:

• Presents a novel approach of using material properties (diffuse-map) and 3D shape (normal-map) computed directly from the captured finger-vein image; the approach is fully backward compatible with previous feature-descriptor based approaches.

Figure 2: Block diagram of the proposed approach for finger-vein presentation attack detection (finger-vein images captured at three light intensities → SfS-Net normal-map & diffuse-map decomposition per intensity → texture descriptor & SVM → sum-rule fusion → bona fide/artefact decision).

• Our approach does not require extensive manual effort and pre-processing, which is one of the main drawbacks of existing methods. Many existing approaches need manual effort in two stages: in the first stage, the finger-vein must be made visible in the captured image; in the second stage, the finger-vein has to be extracted from the captured image. Our approach only requires the finger-vein to be partially visible in the captured image, and the second stage of finger-vein extraction is not needed.

• Presents a baseline evaluation with feature descriptors applied directly on the finger-vein images, and the same with feature descriptors applied after decomposition into diffuse-map and normal-map.

• Presents an extensive evaluation of the proposed approach on a newly collected database of 936 images from 78 unique subjects.

3. Proposed Approach

In this section, we describe the proposed approach for robust finger-vein PAD. The proposed approach is based on computing the 3D shape (normal-map) and material properties (diffuse-map) from a single finger-vein image. We assert that these 3D characteristics differ between bona fide and artefact finger-vein samples; the key reason is that the 3D shape (normal-map) and material properties (diffuse-map) of a natural (bona fide) finger differ from those of an artefact of the same finger-vein. This difference is further highlighted by varying the illumination intensity, which makes it suitable for PAD. The idea of using varying illumination intensities is relatively new and was pointed out by the authors of [5], who noted that the usually ignored factors of illumination intensity and exposure play a role in recovering 3D shape from semi-calibrated photometric stereo. We therefore capture the 3D shape under multiple illumination intensities, as it is much easier to vary the intensity of the light source than to change its direction. Figure 2 shows the block diagram of the proposed approach, which can be structured into four functional units that are explained below.
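To make the data flow of Figure 2 concrete, the following Python sketch strings the four functional units together as a single scoring routine. It is only an illustration under stated assumptions: the helper names (sfsnet_decompose, texture_histogram) and the dictionary of per-map SVMs are hypothetical placeholders, not part of any released implementation; the per-map SVMs are assumed to be trained as described in Section 3.3.

```python
import numpy as np

def pad_score(images, sfsnet_decompose, texture_histogram, svms):
    """End-to-end PAD scoring sketch for one finger.

    images: the three captures I1, I2, I3 at different illumination intensities.
    sfsnet_decompose: callable returning (normal_map, diffuse_map) for one image.
    texture_histogram: callable returning a 1-D feature vector (e.g. an LBP/BSIF/LPQ histogram).
    svms: dict mapping (map_type, intensity_index) -> trained linear SVM (intensity_index = 0, 1, 2).
    """
    scores = []
    for k, image in enumerate(images):                        # three illumination intensities
        normal_map, diffuse_map = sfsnet_decompose(image)     # 3D shape + material properties
        for map_type, m in (("normal", normal_map), ("diffuse", diffuse_map)):
            feat = np.asarray(texture_histogram(m)).reshape(1, -1)
            scores.append(svms[(map_type, k)].decision_function(feat)[0])
    return float(np.sum(scores))  # sum-rule fusion; threshold this for bona fide vs. artefact
```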

3.1. Finger-Vein Image Capture

As illustrated in Figure 1, we capture the finger-vein with the sensor using three illuminations of varying intensity. The three illumination intensities are chosen for each finger such that the human operator obtains the best-quality finger-vein image at intensity-1; subsequently, images are acquired at an illumination lower than intensity-1 and at an illumination higher than intensity-1. We denote these three images as I1, I2, and I3.

Figure 3: Decomposition of bona fide and artefact finger-vein images into normal-map and diffuse-map (rows: bona fide finger-veins, artefact finger-veins; columns: input image, normal-map, diffuse-map).

3.2. Normal-Map & Diffuse-Map Decomposition

Given the three captured finger-vein images (I1, I2, and I3), we obtain a normal-map (denoted I^N_1, I^N_2, and I^N_3) and a diffuse-map (denoted I^D_1, I^D_2, and I^D_3) for each of the three finger-vein images of a subject. Assuming a Lambertian reflectance model of the underlying surface (either bona fide or artefact), we extract the normal-map and diffuse-map using SfS-Net [21]. It can further be noted that the normal-map and diffuse-map provide textural features that can be employed for PAD¹. The diffuse-map I^D_k and the normal-map I^N_k can be obtained using second-order spherical-harmonic lighting coefficients under the assumption of a Lambertian reflectance model, for k = 1, 2, 3 corresponding to the three finger-vein images. We obtain the shading-map r(I^N_k(p)) of the material for the k-th input finger-vein image (using the notation from [3]) via Equation 1, where l_nm are the spherical-harmonic lighting coefficients and r_nm are the harmonic images. We then obtain the diffuse-map I^D_k from the albedo-map ρ_k and the shading-map of Equation 1 via Equation 2. The decomposition of the input finger-vein images into diffuse-maps and normal-maps is summarized in Procedure 1.

¹ Although SfS-Net [21] provides a diffuse-map, normal-map, albedo-map, and shading-map, we notice usable features for PAD in the diffuse-map and normal-map alone.

Procedure 1 Normal-Map & Diffuse-Map Decomposition

Input: I1, I2, I3
Output: I^D_1, I^D_2, I^D_3, I^N_1, I^N_2, I^N_3

1: procedure ComputeNormalMapAndDiffuseMap(I1, I2, I3)
2:   for k ← 1, 2, 3 do
3:     Compute the normal-map I^N_k directly from SfS-Net.
4:     Compute the diffuse-map I^D_k for I_k via Equation 1 and Equation 2.
5:   end for
6:   return I^D_1, I^D_2, I^D_3, I^N_1, I^N_2, and I^N_3
7: end procedure

Figure 4: Schematic description of our finger-vein capture device, which is used to capture finger-vein images at three different illumination intensities.

r(I^N_k(p)) = \sum_{n=0}^{2} \sum_{m=-n}^{n} l_{nm} \, r_{nm}(I^N_k(p))    (1)

I^D_k = \rho_k \, r(I^N_k(p))    (2)
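As a concrete reading of Equations 1 and 2, the sketch below (a minimal illustration, not the authors' implementation) evaluates the second-order spherical-harmonic shading for a given normal-map and multiplies it by the albedo-map to obtain the diffuse-map. It assumes unit-length per-pixel normals, a single-channel albedo-map, and a 9-element lighting vector l_nm such as the one estimated by SfS-Net; the function names are illustrative and the basis constants are the standard real spherical-harmonic coefficients.

```python
import numpy as np

def sh_basis(normal_map):
    """The nine real spherical-harmonic basis functions r_nm (n <= 2),
    evaluated at each pixel's unit normal. normal_map: array of shape (H, W, 3)."""
    x, y, z = normal_map[..., 0], normal_map[..., 1], normal_map[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),       # r_{0,0}
        0.488603 * y,                     # r_{1,-1}
        0.488603 * z,                     # r_{1,0}
        0.488603 * x,                     # r_{1,1}
        1.092548 * x * y,                 # r_{2,-2}
        1.092548 * y * z,                 # r_{2,-1}
        0.315392 * (3.0 * z ** 2 - 1.0),  # r_{2,0}
        1.092548 * x * z,                 # r_{2,1}
        0.546274 * (x ** 2 - y ** 2),     # r_{2,2}
    ], axis=-1)                           # shape (H, W, 9)

def diffuse_from_normals(normal_map, albedo_map, sh_lighting):
    """Equation 1: shading r(I^N_k(p)) = sum_{n,m} l_nm * r_nm(I^N_k(p)).
    Equation 2: diffuse-map I^D_k = albedo rho_k times the shading."""
    shading = sh_basis(normal_map) @ np.asarray(sh_lighting)  # (H, W)
    return albedo_map * shading
```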

One can observe from Figure 3 that the textural differences between bona fide and artefact finger-vein presentations are emphasized after decomposition, in both the normal-map and the diffuse-map. Further, with the lighting estimation using spherical harmonics, we also note that artefacts result in enhanced noise in the extracted normal-map and diffuse-map [15]. This enhanced noise leads to better detection of presentation artefacts.

3.3. Feature Extraction & Classification

Given the set of normal-maps and diffuse-maps of the finger-vein images, we observe differences in the textural properties of bona fide and artefact finger-vein images, as illustrated in Figure 3. To this end, we employ LBP, BSIF, and LPQ to extract features from the normal-maps and diffuse-maps. We then train a linear SVM classifier separately for the normal-maps and for the diffuse-maps at each illumination intensity. Further, with the availability of three independent scores for the normal-maps and three independent scores for the diffuse-maps, we employ score-level sum-rule fusion to obtain a final PAD score, which is used for classification.
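As an illustration of this step, the sketch below (an assumption-laden example rather than the exact implementation) extracts LBP histograms with scikit-image, trains one linear SVM per map type and illumination intensity with scikit-learn, and fuses the decision scores with the sum rule; BSIF and LPQ features would plug into the same pattern.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_histogram(gray_map, P=8, R=1):
    """Uniform LBP histogram of one grey-level normal-map or diffuse-map channel."""
    codes = local_binary_pattern(gray_map, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_classifier(train_maps, labels):
    """Train one linear SVM on LBP features of a single map type at a single
    illumination intensity (labels: 0 = bona fide, 1 = artefact)."""
    feats = np.array([lbp_histogram(m) for m in train_maps])
    return LinearSVC().fit(feats, labels)

def fused_score(classifiers, test_maps_per_classifier):
    """Sum-rule fusion over the six classifiers (normal-map and diffuse-map
    at three illumination intensities) for a batch of test samples."""
    per_clf = [clf.decision_function(np.array([lbp_histogram(m) for m in maps]))
               for clf, maps in zip(classifiers, test_maps_per_classifier)]
    return np.sum(per_clf, axis=0)  # final PAD score, thresholded for the decision
```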

4. Finger-Vein Capture Device & Database

4.1. Finger-Vein Capture Device

Given the unavailability of databases for validating our proposed approach, we construct a custom finger-vein database using a low-cost custom-built finger-vein capture device, whose schematic is shown in Figure 4. Our finger-vein capture device has four important parts, as follows:

• NIR Light Sources.

• Camera and Lens.

• Elevated Physical Structure.

• Slit.

We use 20 LEDs of 980 nm as the near-infrared (NIR) light source, an industrial DMK 22BUCO3 camera with a resolution of 744 × 480 pixels, and a T3Z0312CS lens with a focal length of 8 mm. The elevated physical structure is used to obtain a sufficient amount of light for the image, and the slit is for the placement of the finger. To obtain the finger-vein images, the correct amount of light must penetrate through the finger; this is achieved by the elevated physical structure, which helps concentrate the light. The user places the finger in the slit, a small gap between the NIR light sources and the camera. The NIR light sources are placed such that the finger-veins block the transmitted light as seen from the camera, so the finger-vein patterns appear as dark patterns in the captured finger-vein image.

4.2. Finger-Vein Database

We first capture bona fide finger-vein images for 78 unique subjects using three illumination intensities, with a resolution of 744 × 480 pixels for each image. For each subject, only the right index finger is captured, in two different sessions separated by a gap of 3-4 weeks. Thus, the whole dataset comprises 78 subjects × 3 illumination intensities × 2 sessions = 468 bona fide finger-vein samples, and the same number of artefacts (468 = 78 × 3 × 2). We generate the Presentation Attack Instrument (PAI), i.e., the artefacts, by printing the finger-vein images on glossy printing paper using a RICOH MP 8002 printer. To create high-quality artefacts that can challenge the finger-vein recognition system, we first enhance the finger-vein pattern by employing the STRESS algorithm, following earlier work [20], before printing. We then capture the artefacts using the same capture device with the three different illuminations.

We divide the finger-vein dataset into non-overlapping training and testing partitions of 36 subjects (both sessions) for training and 42 subjects (both sessions) for testing. Thus, in the training set corresponding to the first illumination intensity, we generate 72 = 36 × 2 bona fide features and 72 = 36 × 2 attack features for the diffuse-maps and for the normal-maps. Similarly, for illumination intensities two and three, we generate 72 = 36 × 2 bona fide features and 72 = 36 × 2 attack features for the diffuse-maps and normal-maps. For the testing set at illumination intensity one, we generate 84 = 42 × 2 bona fide scores and 84 = 42 × 2 attack scores for both the diffuse-maps and normal-maps; the same numbers of testing scores are generated for illumination intensities two and three.

Figure 5: DET curves for LBP, BSIF, and LPQ (with and without the proposed approach), where the initial arcs show 0% EER for the proposed approach + BSIF and the proposed approach + LPQ, and 0.5% EER for the proposed approach + LBP.

Method                        |   D-EER (%)     | BPCER@APCER=5% (%) | BPCER@APCER=10% (%)
                              |  I1    I2    I3 |   I1    I2    I3   |   I1    I2    I3
LBP-SVM                       | 16.6   3.5  1.19|  51.1   2.3   0    |  34.5   0     0
LPQ-SVM                       |  5.9   8.3  16.6|   5.9  11.9  20.2  |   3.5   7.1  19.0
BSIF-SVM                      | 11.9   4.7   9.5|  26.1   4.7   9.5  |  16.6   4.7   9.5
Proposed + LBP (Normal-Map)   |  7.1   8.3  11.9|   8.3  17.8  15.4  |   3.5   3.5  11.9
Proposed + LBP (Diffuse-Map)  |  0     1.1   1.1|   0     0     0    |   0     0     0
Proposed + LPQ (Normal-Map)   |  1.1   1.1   1.1|   0     0     0    |   0     0     0
Proposed + LPQ (Diffuse-Map)  |  0     0     0  |   0     0     0    |   0     0     0
Proposed + BSIF (Normal-Map)  |  2.3   2.3   2.3|   0     0     0    |   0     0     0
Proposed + BSIF (Diffuse-Map) |  0     0     0  |   0     0     0    |   0     0     0

Table 1: Results of the proposed approach for illumination intensity 1 (I1), illumination intensity 2 (I2), and illumination intensity 3 (I3), for the normal-map and diffuse-map.

Method                                            | D-EER (%) | BPCER@APCER=5% (%) | BPCER@APCER=10% (%)
LBP [13] & SVM (Sum-Rule Fusion)                  |   2.3     |        0           |        0
LPQ [14] & SVM (Sum-Rule Fusion)                  |   9.5     |       10.7         |        5.9
BSIF [7] & SVM (Sum-Rule Fusion)                  |   4.7     |        3.5         |        3.5
Proposed approach + LBP & SVM (Sum-Rule Fusion)   |   0.5     |        0           |        0
Proposed approach + LPQ & SVM (Sum-Rule Fusion)   |   0       |        0           |        0
Proposed approach + BSIF & SVM (Sum-Rule Fusion)  |   0       |        0           |        0

Table 2: LBP-SVM, BSIF-SVM, and LPQ-SVM with and without the proposed approach.

5. Experimental Results

For the set of experiments conducted in this work, we report the PAD performance using the metrics defined in the International Standard ISO/IEC 30107-3 [6]. We also report the Detection Equal Error Rate (D-EER, in %) and Detection Error Trade-off (DET) curves.
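For reference, a minimal sketch of how these metrics can be computed from the fused bona fide and attack score sets is given below; the convention that a higher score indicates an attack presentation is an assumption of this sketch, not something prescribed by the standard.

```python
import numpy as np

def apcer_bpcer(bona_scores, attack_scores, threshold):
    """APCER / BPCER at one decision threshold (ISO/IEC 30107-3 style),
    assuming higher score = attack presentation."""
    bona = np.asarray(bona_scores)
    attack = np.asarray(attack_scores)
    apcer = np.mean(attack < threshold)   # attack presentations classified as bona fide
    bpcer = np.mean(bona >= threshold)    # bona fide presentations classified as attacks
    return apcer, bpcer

def d_eer(bona_scores, attack_scores):
    """Detection Equal Error Rate: the error rate at the threshold where
    APCER and BPCER are closest to each other."""
    thresholds = np.unique(np.concatenate([bona_scores, attack_scores]))
    rates = np.array([apcer_bpcer(bona_scores, attack_scores, t) for t in thresholds])
    idx = int(np.argmin(np.abs(rates[:, 0] - rates[:, 1])))
    return rates[idx].mean()
```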

Table 1 shows detailed results for each illumination intensity and for the normal-map and diffuse-map separately, where the diffuse-map performs better. The diffuse-map exhibits larger textural differences than the normal-map; additionally, it captures the difference in material properties between a bona fide finger-vein (skin) and an artefact (printed paper in our case). It can be noted from Table 1 that with LPQ, a D-EER of 0% is achieved for the diffuse-map and normal-map. However, LBP on individual normal-maps gives results close to LBP-SVM applied directly on the finger-vein image. Table 2 presents the results of the proposed approach obtained by sum-rule fusion and compares them with classical texture-descriptor based state-of-the-art approaches. As can be noted from Table 2, the proposed approach outperforms the existing SOTA using texture descriptors: we achieve a best D-EER of 0.0% compared to the best SOTA D-EER of 2.3%. The results can also be seen in Figure 5, where the Detection Error Trade-off curves illustrate that the sum-rule fusion of scores results in a higher level of performance.

6. Conclusions & Future Work

In this paper, we presented an approach in which single-image decomposition of a finger-vein image into a normal-map and a diffuse-map results in higher accuracy compared to directly applying texture descriptors on the image. Our technique does not require elaborate pre-processing (image enhancement) or manual methods (finger-vein extraction). We have further validated our proposed approach on a newly collected finger-vein image database. Future work will include the evaluation of the proposed approach on a large-scale database.

Acknowledgement

This work was carried out under the funding of the Research Council of Norway under Grant No. IKTPLUSS 248030/O70.

References

[1] Finger-Vein Authentication Brings Additional Security to ATMs in China. https://findbiometrics.com/finger-vein-authentication-atms-china-502087/, 2018.

[2] A. Anjos, P. Tome, and S. Marcel. An introduction to vein presentation attacks and detection. In Handbook of Biometric Anti-Spoofing, pages 419–438. Springer, 2019.

[3] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218–233, Feb 2003.

[4] C. Busch, S. Venkatesh, K. B. Raja, and R. Ramachandra. Fingervein presentation attack detection using transferable features from deep convolution neural networks. In Deep Learning in Biometrics, pages 295–305. CRC Press, 2018.

[5] D. Cho, Y. Matsushita, Y. W. Tai, and I. S. Kweon. Semi-calibrated photometric stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2018.

[6] ISO/IEC JTC1 SC37 Biometrics. ISO/IEC FDIS 30107-3. Information Technology - Biometric presentation attack detection - Part 3: Testing and Reporting. International Organization for Standardization, 2017.

[7] J. Kannala and E. Rahtu. BSIF: Binarized statistical image features. In Proc. 21st International Conference on Pattern Recognition (ICPR 2012), Tsukuba, Japan, pages 1363–1366, 2012.

[8] D. Kocher, S. Schwarz, and A. Uhl. Empirical evaluation of LBP-extension features for finger vein spoofing detection. In 2016 International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1–5, Sep. 2016.

[9] M. Kono, S.-i. Umemura, T. Miyatake, K. Harada, Y. Ito, and H. Ueki. Personal identification system, Nov. 2 2004. US Patent 6,813,010.

[10] B. Maser, D. Sollinger, and A. Uhl. PRNU-based detection of finger vein presentation attacks. In 2019 7th International Workshop on Biometrics and Forensics (IWBF), pages 1–6. IEEE, 2019.

[11] D. T. Nguyen, Y. H. Park, K. Y. Shin, S. Y. Kwon, H. C. Lee, and K. R. Park. Fake finger-vein image detection based on Fourier and wavelet transforms. Digital Signal Processing, 23(5):1401–1413, 2013.

[12] D. T. Nguyen, H. S. Yoon, T. D. Pham, and K. R. Park. Spoof detection for finger-vein recognition system using NIR camera. Sensors, 17(10):2261, 2017.

[13] T. Ojala, M. Pietikainen, and T. Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, July 2002.

[14] V. Ojansivu and J. Heikkila. Blur insensitive texture classification using local phase quantization. In A. Elmoataz, O. Lezoray, F. Nouboud, and D. Mammass, editors, Image and Signal Processing, pages 236–243, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg.

[15] P.-M. Lam, C.-S. Leung, and T.-T. Wong. Noise-resistant fitting for spherical harmonics. IEEE Transactions on Visualization and Computer Graphics, 12(2):254–265, March 2006.

[16] X. Qiu, W. Kang, S. Tian, W. Jia, and Z. Huang. Finger vein presentation attack detection using total variation decomposition. IEEE Transactions on Information Forensics and Security, 13(2):465–477, 2017.

[17] X. Qiu, S. Tian, W. Kang, W. Jia, and Q. Wu. Finger vein presentation attack detection using convolutional neural networks. In Chinese Conference on Biometric Recognition, pages 296–305. Springer, 2017.

[18] R. Raghavendra, M. Avinash, S. Marcel, and C. Busch. Finger vein liveness detection using motion magnification. In 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), pages 1–7, Sep. 2015.

[19] R. Raghavendra and C. Busch. Presentation attack detection algorithms for finger vein biometrics: A comprehensive study. In 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), pages 628–632, Nov 2015.

[20] R. Ramachandra, K. Raja, S. Venkatesh, and C. Busch. Design and development of low-cost sensor to capture ventral and dorsal finger-vein for biometric authentication. IEEE Sensors Journal, 19(15):6102–6111, August 2019.

[21] S. Sengupta, A. Kanazawa, C. D. Castillo, and D. W. Jacobs. SfSNet: Learning shape, reflectance and illuminance of faces 'in the wild'. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6296–6305, June 2018.

[22] S. Tirunagari, N. Poh, M. Bober, and D. Windridge. Windowed DMD as a microtexture descriptor for finger vein counter-spoofing in biometrics. In 2015 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1–6, Nov 2015.

[23] P. Tome, R. Raghavendra, C. Busch, S. Tirunagari, N. Poh, B. H. Shekar, D. Gragnaniello, C. Sansone, L. Verdoliva, and S. Marcel. The 1st competition on counter measures to finger vein spoofing attacks. In 2015 International Conference on Biometrics (ICB), pages 513–518, May 2015.

[24] P. Tome, M. Vanoni, and S. Marcel. On the vulnerability of finger vein recognition to spoofing. In 2014 International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1–10, Sep. 2014.

[25] M. Vanoni, P. Tome, L. El Shafey, and S. Marcel. Cross-database evaluation using an open finger vein sensor. In 2014 IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS) Proceedings, pages 30–35, Oct 2014.