
Fingerprint verification competition 2006

The interest in fingerprint-based biometric systems has constantly grown in recent years and considerable efforts have been focused by both academia and industry on the development of new algorithms for fingerprint recognition. Raffaele Cappelli, Matteo Ferrara, Annalisa Franco and Davide Maltoni of the Biometric System Laboratory at the University of Bologna explain the findings of the Fingerprint Verification Competition 2006.

To date, only a few benchmarks have been available for comparing advances in fingerprint verification, thus leading to the dissemination of confusing, incomparable and irreproducible results. The aim of organizing the Fingerprint Verification Competition was to take the first step towards the establishment of a common basis to better understand the state of the art and what can be expected from fingerprint technology in the future.

In 2006, following the interest raised by the three previous editions, the Biometric System Laboratory (University of Bologna), the Pattern Recognition and Image Processing Laboratory (Michigan State University), the Biometric Test Center (San Jose State University) and the Biometrics Research Lab - ATVS (Universidad Autonoma de Madrid) organized the fourth Fingerprint Verification Competition (FVC2006).

FVC focuses on fingerprint verification software assessment. The common objective of the different editions of the competition is to establish a benchmark, allowing companies and academic institutions to unambiguously compare performance and track improvements in their fingerprint recognition algorithms. The evaluation is independent and strongly supervised: all the algorithms are tested at the evaluators' site, on the evaluators' hardware, in a totally controlled environment where all input/output operations are strictly monitored.

Two different sub-competitions were organized using the same databases:
• Light category, intended for algorithms characterized by low computational resources, limited memory usage and small template size;
• Open category, for which no restrictive requirements are given, except those needed for practical testing reasons.
The limits imposed for the two categories are reported in Table 1; the sketch after the table illustrates how such limits can be enforced.

                                      Open       Light
Maximum time for enrolment            10.0 sec   0.3 sec
Maximum time for comparison           5.0 sec    0.1 sec
Maximum template size                 /          2 KBytes
Maximum amount of memory allocated    /          4 MBytes

Table 1. Limits imposed for the Open and Light categories (a slash indicates no limit).
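To make these limits concrete, the following is a minimal sketch of how a test harness might impose the enrolment-time and template-size limits on a submitted executable. It is illustrative only: the program name enroll.exe and its command-line interface are hypothetical, not the actual FVC test tools.

```python
# Minimal sketch of enforcing the Table 1 limits on a submitted enrolment
# program. Hypothetical harness; not the actual FVC test code.
import os
import subprocess

MAX_ENROLL_TIME = {"open": 10.0, "light": 0.3}          # seconds (Table 1)
MAX_TEMPLATE_SIZE = {"open": None, "light": 2 * 1024}   # bytes; no limit in Open

def run_enrolment(category, image_path, template_path):
    """Run one enrolment attempt; return True on success, False if the
    attempt must be counted as a failure to enrol (FTE)."""
    try:
        result = subprocess.run(
            ["enroll.exe", image_path, template_path],  # hypothetical interface
            timeout=MAX_ENROLL_TIME[category],
        )
    except subprocess.TimeoutExpired:
        return False                    # timeout -> FTE imposed by the harness
    if result.returncode != 0 or not os.path.exists(template_path):
        return False                    # program crash or missing template -> FTE
    limit = MAX_TEMPLATE_SIZE[category]
    if limit is not None and os.path.getsize(template_path) > limit:
        return False                    # exceeded template size limit -> FTE
    return True
```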

Each participating group was allowed to submit one algorithm in each category. A total of 150 registrations were received and, in the end, 53 competitors participated by submitting 70 algorithms: 44 in the Open category and 26 in the Light category.

Databases

Four databases, created using three different scanners and the SFinGe synthetic generator, were used in the FVC2006 benchmark (see Table 2). Figure 1 shows an example image at the same scale factor from each database.

       Technology                          Image size   Resolution
DB1    Electric Field Sensor (AuthenTec)   96×96        250 dpi
DB2    Optical Sensor (BiometriKa)         400×560      569 dpi
DB3    Thermal Sweeping Sensor (Atmel)     400×500      500 dpi
DB4    Synthetic Generator (SFinGe v3.0)   288×384      About 500 dpi

Table 2. Scanners/technologies used for collecting the databases.

Unlike in the first three editions of FVC, data collection in FVC2006 was performed without deliberately introducing difficulties such as exaggerated distortion, large amounts of rotation and displacement, wet/dry impressions, etc.; however, the population is more heterogeneous and also includes manual workers and elderly people.





The images were collected in three European countries within the European BioSec project; the same population and acquisition protocol were used for all three acquisition scanners. The final FVC2006 datasets were selected from a larger database by choosing the most difficult fingers according to the NIST quality index.
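As an illustration of this selection step, the sketch below ranks fingers by average image quality and keeps the worst ones. It assumes a per-image quality score in the spirit of NIST's NFIQ index, where higher values indicate poorer quality; the data layout and the compute_quality helper are hypothetical.

```python
# Illustrative sketch of selecting the "most difficult" fingers by quality.
from statistics import mean

def select_most_difficult(fingers, num_to_keep, compute_quality):
    """fingers: dict mapping finger id -> list of image paths.
    compute_quality: callable returning a score where higher = worse quality.
    Returns the ids of the num_to_keep fingers with the worst average quality."""
    avg_quality = {
        fid: mean(compute_quality(img) for img in images)
        for fid, images in fingers.items()
    }
    # Sort by average quality, worst (highest value) first.
    ranked = sorted(avg_quality, key=avg_quality.get, reverse=True)
    return ranked[:num_to_keep]
```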

Testing protocol

Participants submitted each algorithm in the form of two executable programs: the first for enrolling a fingerprint image and producing the corresponding template, and the second for comparing a fingerprint template to a fingerprint image and producing a comparison score in the range [0,1]. The input includes a database-specific configuration file. Each algorithm is tested by performing, for each database, the following comparisons:
• Genuine recognition attempts: the template of each fingerprint image is compared to the remaining images of the same finger, avoiding symmetric matches;
• Impostor recognition attempts: the template of the first image of each finger is compared to the first image of the remaining fingers, avoiding symmetric matches.
For each database, a total of 1540 enrolment attempts, 9240 genuine comparisons and 9730 impostor comparisons are performed. The evaluation for both categories was carried out on PCs running the Windows XP Professional operating system, with an Intel Pentium 4 at 3.20 GHz and 1 GB of RAM.
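These totals are consistent with each database containing 140 fingers with 12 impressions per finger: 140×66 = 9240 genuine and 140×139/2 = 9730 impostor comparisons, with templates needed only for the first 11 impressions of each finger (140×11 = 1540 enrolments). The following sketch reproduces the comparison schedule under that assumption.

```python
# Sketch of the FVC2006 comparison schedule, assuming 140 fingers with
# 12 impressions each per database (consistent with the reported totals).
from itertools import combinations

NUM_FINGERS, NUM_IMPRESSIONS = 140, 12

genuine, impostor = [], []
for finger in range(NUM_FINGERS):
    # Genuine attempts: each image against the remaining images of the same
    # finger; pairs (i, j) with i < j only, so symmetric matches are avoided.
    for i, j in combinations(range(NUM_IMPRESSIONS), 2):
        genuine.append(((finger, i), (finger, j)))

# Impostor attempts: first image of each finger against the first image of
# the remaining fingers, again counting each unordered pair once.
for f, g in combinations(range(NUM_FINGERS), 2):
    impostor.append(((f, 0), (g, 0)))

# Templates required: the template side of each genuine pair, plus the first
# impression of each finger (used on the template side of impostor attempts).
templates = {pair[0] for pair in genuine} | {(f, 0) for f in range(NUM_FINGERS)}

print(len(templates), len(genuine), len(impostor))  # 1540 9240 9730
```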

The diagram in Figure 2 summarizes the testing procedure of FVC2006.

Results

The following performance indicators were measured and made available at the FVC2006 web site (http://bias.csr.unibo.it/fvc2006/):
• Genuine and impostor score histograms;
• False Match Rate (FMR) and False Non-Match Rate (FNMR) graphs, and the Decision Error Trade-off (DET) graph;
• Failure-to-Enrol Rate (FTE) and Failure-to-Compare Rate (FTC), which can be reported by the algorithm or imposed by the test procedure in case of timeout, program crash, exceeded memory limit, exceeded template limit or missing template;
• Equal Error Rate (EER), FMR100, FMR1000, ZeroFMR and ZeroFNMR;
• Average enrolment time and average comparison time;
• Maximum memory allocated for enrolment and for comparison;
• Average and maximum model size.
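As a rough sketch of how the accuracy indicators can be derived from the genuine and impostor score lists (higher score = greater similarity), consider the function below. The exact FVC computation may differ in details such as threshold handling and interpolation, so this is illustrative only.

```python
# Minimal sketch of computing EER, FMR100, FMR1000 and ZeroFMR from score
# lists. FMR100/FMR1000 are the lowest FNMR for FMR <= 1% / 0.1%; ZeroFMR
# is the lowest FNMR at which no false matches occur.
import numpy as np

def indicators(genuine, impostor):
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    # Candidate thresholds: every observed score plus one value above the
    # maximum, so that an FMR of exactly zero is always reachable.
    t = np.unique(np.concatenate([g, i]))
    t = np.append(t, t[-1] + 1e-6)[:, None]
    fmr = (i >= t).mean(axis=1)    # fraction of impostor attempts accepted
    fnmr = (g < t).mean(axis=1)    # fraction of genuine attempts rejected
    k = np.argmin(np.abs(fmr - fnmr))
    return {
        "EER": (fmr[k] + fnmr[k]) / 2,        # approximate equal error rate
        "FMR100": fnmr[fmr <= 0.01].min(),    # lowest FNMR with FMR <= 1%
        "FMR1000": fnmr[fmr <= 0.001].min(),  # lowest FNMR with FMR <= 0.1%
        "ZeroFMR": fnmr[fmr == 0.0].min(),    # lowest FNMR with no false matches
    }
```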

DB   Alg.   EER      FMR100    FMR1000    ZeroFMR   FTE    FTC    EnrT    CmpT    AvgMS   MaxMS    EnrMem   MatMem
DB1  P017   5.564     9.708    15.335     22.922    0.00   0.00   0.038   0.039   1.22     1.66     1472     2172
DB1  P066   5.978     9.556    14.167     19.405    0.00   0.00   0.430   0.506   5.63    10.84     8080     8520
DB1  P045   6.122    10.498    22.348     41.494    0.00   0.00   0.074   0.311   3.88     5.23     1476     1916
DB2  P088   0.021     0.011     0.022      0.022    0.00   0.00   1.085   1.100   1.60     3.17     6224     7460
DB2  P015   0.032     0.022     0.032      0.032    0.00   0.00   1.434   1.461   2.76     5.54    19744    24956
DB2  P009   0.095     0.000     0.097      0.249    0.00   0.00   0.799   0.899   2.00     3.19     6228     6404
DB3  P015   1.534     1.753     2.760      7.175    0.00   0.00   1.682   1.678   5.88    10.67    20324    25528
DB3  P058   1.608     1.786     2.413      3.755    0.00   0.00   0.307   0.307   4.34    12.31     3272     3292
DB3  P009   1.645     1.937     3.030      3.929    0.00   0.00   0.583   0.641   1.81     3.06     6020     6196
DB4  P009   0.269     0.141     0.400      0.703    0.00   0.00   0.564   0.601   1.69     2.43     5268     5436
DB4  P074   0.453     0.368     0.823      1.732    0.00   0.00   0.230   0.238   5.04     8.51     2068     2084
DB4  P066   0.466     0.465     0.855      2.045    0.00   0.00   0.725   1.023   7.21    11.00     3284     6688

Table 3. Open category: top three algorithms for each database, sorted by EER. EER, FMR100, FMR1000, ZeroFMR, FTE and FTC are percentages; EnrT and CmpT are the average enrolment and comparison times in seconds; AvgMS and MaxMS are the average and maximum model (template) sizes in KB; EnrMem and MatMem are the maximum memory allocated for enrolment and for comparison in KB.

Figure 1. A fingerprint image from each database, at the same scale factor.


The results over the four databases of the top three algorithms in the Open and Light categories are reported in Table 3 and Table 4 respectively.

Although an in-depth analysis of the results is outside the aims of this feature, two simple remarks can be made:
• Fingerprint scanners have a huge impact on accuracy: the best EER on DB1 is 5.564%, while the best EER on DB2 is 0.021% (both databases were collected from the same population, the former using a small-area, low-resolution scanner, the latter using a large-area scanner);
• Constraints on execution time, template size and the amount of memory used (Light category) cause a noticeable performance drop (for instance, the best EER on DB2 in the Light category is seven times worse than the best result in the Open category).

Conclusions

Performance evaluation is important for all pattern recognition applications and particularly so for biometrics, which is receiving widespread international attention for citizen identity verification and identification in large-scale applications. Unambiguously and reliably assessing the current state of the technology is essential for understanding its limitations and addressing future research requirements.

The interest shown in the FVC testing program by algorithm developers continues to be very high: the fingerprint databases of the three previous editions constitute the most frequently used benchmarking databases in scientific publications on fingerprint recognition; in this fourth edition (FVC2006), a total of 70 algorithms, submitted by 53 participants, have been evaluated by the organizers.

The huge amount of data collected during the tests (not only match scores, but also execution times, template sizes, etc.), together with the high-level information on the algorithms, is currently being analyzed to gain more insight into the current state of the art of this challenging pattern recognition problem.

This feature was provided by Raffaele Cappelli, Matteo Ferrara, Annalisa Franco and Davide Maltoni of the Biometric System Laboratory, University of Bologna. They can be contacted at e-mail: [email protected], [email protected], [email protected] and [email protected]

DB   Alg.   EER      FMR100    FMR1000    ZeroFMR   FTE    FTC    EnrT    CmpT    AvgMS   MaxMS    EnrMem   MatMem
DB1  P129   5.356     8.225    13.323    100.000    0.32   1.01   0.031   0.029   1.94     1.94     1860     2028
DB1  P017   5.564     9.708    15.335     22.922    0.00   0.00   0.037   0.039   1.22     1.66     1472     2172
DB1  P133   5.888     8.506    12.890    100.000    0.32   1.01   0.039   0.036   1.71     1.71     2888     3032
DB2  P065   0.148     0.087     0.173      0.422    0.00   0.01   0.092   0.091   0.23     0.46     1924     1948
DB2  P133   0.158     0.087     0.195      0.335    0.00   0.00   0.064   0.066   1.71     1.71     3272     3416
DB2  P129   0.169     0.087     0.206      0.325    0.00   0.00   0.056   0.056   1.94     1.94     2236     2420
DB3  P133   1.634     1.742     3.009      4.329    0.06   0.79   0.054   0.056   1.71     1.71     3252     3396
DB3  P129   1.645     1.753     2.987      4.210    0.06   0.79   0.047   0.046   1.94     1.94     2216     2392
DB3  P058   2.351     2.738     4.686      6.526    0.00   0.00   0.121   0.082   0.87     1.83     2208     1708
DB4  P121   0.427     0.249     0.747      1.039    0.00   0.00   0.049   0.049   0.87     1.61     1736     1760
DB4  P129   0.496     0.400     1.115      1.742    0.00   0.00   0.055   0.054   1.94     1.94     2176     2344
DB4  P133   0.522     0.465     1.039      2.327    0.00   0.00   0.063   0.062   1.71     1.71     3212     3344

Table 4. Light category: top three algorithms for each database, sorted by EER (columns as in Table 3).

Figure 2. Testing procedure.
