
IRIS RECOGNITION SEMINAR REPORT 2011

Chapter 1: INTRODUCTION

1.1 Biometric Technology

Biometrics is the science of establishing human identity by using physical or

behavioral traits such as face, fingerprints, palm prints, iris, hand geometry, and voice. Iris

recognition systems, in particular, are gaining interest because the iris’s rich texture offers a

strong biometric cue for recognizing individuals. Located just behind the cornea and in front

of the lens, the iris uses the dilator and sphincter muscles that govern pupil size to control the

amount of light that enters the eye. Near-infrared (NIR) images of the iris’s anterior surface

exhibit complex patterns that computer systems can use to recognize individuals. Because

NIR lighting can penetrate the iris’s surface, it can reveal the intricate texture details that are

present even in dark-colored irides. The iris’s textural complexity and its variation across

eyes have led scientists to postulate that the iris is unique across individuals. Further, the iris

is the only internal organ readily visible from the outside. Thus, unlike fingerprints or palm

prints, environmental effects cannot easily alter its pattern. An iris recognition system uses

pattern matching to compare two iris images and generate a match score that reflects their

degree of similarity or dissimilarity.

Iris recognition systems are already in operation worldwide, including an expellee

tracking system in the United Arab Emirates, a welfare distribution program for Afghan

refugees in Pakistan, a border-control immigration system at Schiphol Airport in the

Netherlands, and a frequent traveler program for preapproved low-risk travelers crossing the

US-Canadian border.

Iris recognition efficacy is rarely impeded by glasses or contact lenses. Iris technology

has the smallest outlier (those who cannot use/enroll) group of all biometric technologies.

Because of its speed of comparison, iris recognition is the only biometric technology well-

suited for one-to-many identification. A key advantage of iris recognition is its stability, or

template longevity, as, barring trauma, a single enrollment can last a lifetime.


1.2 History and Development of Biometrics

The idea of using iris patterns for personal identification was originally proposed in 1936 by ophthalmologist Frank Burch. By the 1980s the idea had appeared in James Bond films, but it still remained science fiction and conjecture. In 1987 two other ophthalmologists, Aram Safir and Leonard Flom, patented this idea, and they then asked John Daugman to try to create actual algorithms for iris recognition. These algorithms, which Daugman patented in 1994, are the basis for all current iris recognition systems and products.

Daugman's algorithms are owned by Iridian Technologies, and the process is licensed to several other companies that serve as system integrators and developers of special platforms exploiting iris recognition. In recent years several products have been developed for acquiring iris images over a range of distances and in a variety of applications. One active imaging system, developed in 1996 by licensee Sensar, deployed special cameras in bank ATMs to capture iris images at a distance of up to 1 meter. This active imaging system was installed in cash machines by both NCR Corp. and Diebold Corp. in successful public trials in several countries during 1997 to 1999. A newer and smaller imaging device is the low-cost Panasonic Authenticam digital camera for handheld, desktop, e-commerce and other information security applications. Ticketless air travel, check-in and security procedures based on iris recognition kiosks in airports have been developed by EyeTicket. Companies in several countries are now using Daugman's algorithms in a variety of products.

1.3 Classification of Biometrics


Fig: Classification of Biometrics

Biometrics encompasses both physiological and behavioral characteristics. A physiological characteristic is a relatively stable physical feature such as a fingerprint, iris pattern, retina pattern or facial feature. A behavioral characteristic reflects a person's behavior, such as a signature, keyboard typing pattern or speech pattern. The degree of intra-personal variation is smaller in a physical characteristic than in a behavioral one. For example, a person's iris pattern remains the same over time, whereas a signature is influenced by physical and emotional conditions.

Disadvantages

Even though conventional methods of identification are indeed inadequate, biometric technology is not as pervasive and widespread as many of us expect it to be. One of the primary reasons is performance. Issues affecting performance include accuracy, cost and integrity.

Accuracy

Even if a legitimate biometric characteristic is presented to a biometric system, correct authentication cannot be guaranteed. This could be because of sensor noise, limitations of the processing methods, and variability in both the biometric characteristic and its presentation.

Cost

Cost is tied to accuracy; many applications, like logging on to a PC, are sensitive to the additional cost of including biometric technology.

FIGURE 2. COMPARISON BETWEEN COST AND ACCURACY (hand, signature, iris, face, fingerprint, voice and retina plotted against the cost and accuracy axes)

Chapter 2: IRIS RECOGNITION

2.1 Anatomy of the Human Iris

The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human

eye. A front-on view of the iris is shown in Figure 1.1. The iris is perforated close to its

center by a circular aperture known as the pupil. The function of the iris is to control the

amount of light entering through the pupil, and this is done by the sphincter and the dilator

muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the

pupil size can vary from 10% to 80% of the iris diameter [2]. The iris consists of a number of

layers; the lowest is the epithelium layer, which contains dense pigmentation cells. The

stromal layer lies above the epithelium layer, and contains blood vessels, pigment cells and

the two iris muscles. The density of stromal pigmentation determines the colour of the iris.

The externally visible surface of the multi-layered iris contains two zones, which often differ in colour [3]: an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern.

Fig 1.1: A front-on view of the human eye

Formation of the iris begins during the third month of embryonic life. The unique pattern on the surface of the iris is formed during the first year of life, and pigmentation of the stroma takes place for the first few years. Formation of the unique patterns of the iris is random and not related to any genetic factors. The only characteristic that is dependent on genetics is the pigmentation of the iris, which determines its colour. Due to the epigenetic nature of iris


patterns, the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns.

2.2 History and Development of the Iris

The human iris begins to form during the third month of gestation. The structures creating its distinctive pattern are completed by the eighth month of gestation, but pigmentation continues into the first years after birth. The layers of the iris have both ectodermal and mesodermal embryological origin, consisting of: a darkly pigmented epithelium, pupillary dilator and sphincter muscles, a heavily vascularized stroma and an anterior layer of chromatophores with a genetically determined density of melanin pigment granules. The combined effect is a visible pattern displaying various distinct features such as arching ligaments, crypts, ridges and a zigzag collarette. Iris colour is determined mainly by the density of the stroma and its melanin content, with blue irises resulting from an absence of pigment: long-wavelength light penetrates and is absorbed by the pigment epithelium, while shorter wavelengths are reflected and scattered by the stroma. The heritability and ethnographic diversity of iris colour have long been studied, but comparatively little attention has been paid to the achromatic pattern complexity and textural variability of the iris among individuals.

A permanent visible characteristic of the iris is the trabecular meshwork, a tissue which gives the appearance of dividing the iris in a radial fashion. Other visible characteristics include the collagenous tissue of the stroma, ciliary processes, contraction furrows, crypts, rings, a corona, a pupillary frill, coloration and sometimes freckles. The striated anterior layer covering the trabecular meshwork creates the predominant texture seen under visible light.

2.3 STAGES INVOLVED IN IRIS RECOGNITION

It includes three main stages:

2.3.1) Image Acquisition and Segmentation

2.3.2) Image Normalization

2.3.3) Feature Encoding and Matching


2.3.1 IMAGE ACQUISITION AND SEGMENTATION

IMAGE ACQUISITION

One of the major challenges of automated iris recognition is to capture a high-quality image of the iris while remaining noninvasive to the human operator.

The main concerns for the image-acquisition rig are:

Images obtained with sufficient resolution and sharpness

Good contrast in the interior iris pattern under proper illumination

An iris well centred in the frame without unduly constraining the operator

Artifacts eliminated as much as possible

Fig: The Daugman image-acquisition system


Fig: The Wildes et al. image-acquisition system

Segmentation

The first stage of iris recognition is to isolate the actual iris region in a digital eye image.

The iris region, shown in Figure 1.1, can be approximated by two circles, one for the

iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. The eyelids

and eyelashes normally occlude the upper and lower parts of the iris region. Also, specular

reflections can occur within the iris region corrupting the iris pattern. A technique is required

to isolate and exclude these artifacts as well as to locate the circular iris region.

This can be done using the following techniques:

Hough transform

Daugman's integro-differential operator

Hough Transform:

The Hough transform is a standard computer vision algorithm that can be used to

determine the parameters of simple geometric objects, such as lines and circles, present in an

image. The circular Hough transform can be employed to deduce the radius and centre

coordinates of the pupil and iris regions. An automatic segmentation algorithm based on the

circular Hough transform is employed by Wildes et al. Firstly, an edge map is generated by

calculating the first derivatives of intensity values in an eye image and then thresholding the

result. From the edge map, votes are cast in Hough space for the parameters of circles passing

through each edge point. These parameters are the centre coordinates xc and yc, and the


radius r, which are able to define any circle according to the equation

(x - x_c)^2 + (y - y_c)^2 = r^2

A maximum point in the Hough space will correspond to the radius and centre coordinates of the circle best defined by the edge points. Wildes et al. make use of the parabolic Hough transform to detect the eyelids, approximating the upper and lower eyelids with parabolic arcs, which are represented as

(-(x - h_j)\sin\theta_j + (y - k_j)\cos\theta_j)^2 = a_j((x - h_j)\cos\theta_j + (y - k_j)\sin\theta_j)

where a_j controls the curvature, (h_j, k_j) is the peak of the parabola and \theta_j is its angle of rotation relative to the x-axis.

In performing the preceding edge detection step, Wildes et al. bias the derivatives in the

horizontal direction for detecting the eyelids, and in the vertical direction for detecting the

outer circular boundary of the iris; this is illustrated in the figure below. The motivation

for this is that the eyelids are usually horizontally aligned, and also the eyelid edge map will

corrupt the circular iris boundary edge map if all gradient data are used. Taking only the vertical gradients for locating the iris boundary reduces the influence of the eyelids when performing the circular Hough transform, and not all of the edge pixels defining the circle are required for successful localisation. Not only does this make circle localisation more accurate, it also makes it more efficient, since there are fewer edge points to cast votes in the Hough space.

Figure 2.1 – a) an eye image, b) corresponding edge map, c) edge map with only horizontal gradients, d) edge map with only vertical gradients.
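To make this step concrete, the following minimal sketch applies Canny edge detection and the circular Hough transform to an eye image using OpenCV. It is an illustration rather than the Wildes or Masek implementation; the file name eye.png, the thresholds and the radius ranges are assumptions that would need tuning for a real iris database.

# Minimal sketch: locating pupil and iris boundaries with Canny + circular Hough transform.
# Assumes OpenCV (cv2) and NumPy; "eye.png" is a placeholder NIR eye image.
import cv2
import numpy as np

img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (7, 7), 2)

# Explicit edge map, as described above. Note that cv2.HoughCircles applies its own
# Canny stage internally (param1 is its upper threshold), so this map is illustrative.
edges = cv2.Canny(blurred, 40, 80)

# Circular Hough transform: votes are cast in (xc, yc, r) space and the strongest
# peaks are returned as candidate circles.
pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                         param1=80, param2=20, minRadius=20, maxRadius=70)
iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                        param1=80, param2=30, minRadius=80, maxRadius=150)

if pupil is not None and iris is not None:
    px, py, pr = np.round(pupil[0, 0]).astype(int)
    ix, iy, ir = np.round(iris[0, 0]).astype(int)
    print("pupil (x, y, r):", (px, py, pr), " iris (x, y, r):", (ix, iy, ir))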

Daugman’s Integro-differential Operator


Daugman makes use of an integro-differential operator for locating the circular iris and pupil

regions, and also the arcs of the upper and lower eyelids. The integro-differential operator is

defined as

\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r} \, ds \right|

where I(x,y) is the eye image, r is the radius to search for, Gσ(r) is a Gaussian smoothing

function, and s is the contour of the circle given by r, x0, y0. The operator searches for the

circular path where there is maximum change in pixel values, by varying the radius and

centre x and y position of the circular contour. The operator is applied iteratively with the

amount of smoothing progressively reduced in order to attain precise localisation. Eyelids are

localised in a similar manner, with the path of contour integration changed from circular to an

arc.
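The following NumPy/SciPy sketch illustrates the idea behind the operator for circular boundaries only: for a candidate centre it computes the average intensity around circles of increasing radius, smooths the radial derivative of that profile with a Gaussian, and keeps the radius where the smoothed derivative is largest. It is a simplified, brute-force illustration, not Daugman's implementation (no coarse-to-fine iteration, no eyelid arcs); the candidate-centre grid and smoothing value are assumptions.

# Simplified illustration of Daugman's integro-differential search (circular boundaries only).
# Assumes NumPy and SciPy; `image` is a 2-D grayscale array.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circular_mean(image, x0, y0, r, n_samples=64):
    """Average intensity along the circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].mean()

def integro_differential(image, x0, y0, radii, sigma=2.0):
    """Radius with the largest Gaussian-smoothed change in circular mean intensity."""
    profile = np.array([circular_mean(image, x0, y0, r) for r in radii])
    derivative = np.abs(gaussian_filter1d(np.gradient(profile), sigma))
    best = int(np.argmax(derivative))
    return radii[best], derivative[best]

# Example: search a small grid of candidate centres for the strongest circular edge.
image = np.random.rand(240, 320)          # placeholder for a real eye image
radii = np.arange(20, 120)
candidates = [(x, y) for x in range(140, 180, 10) for y in range(100, 140, 10)]
best = max(((x, y) + integro_differential(image, x, y, radii) for (x, y) in candidates),
           key=lambda t: t[3])
print("centre:", best[:2], "radius:", best[2], "score:", best[3])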

Implementation of Segmentation

It was decided to use the circular Hough transform for detecting the iris and pupil boundaries.

This involves first employing Canny edge detection to generate an edge map.

Eyelids were isolated by first fitting a line to the upper and lower eyelid using the linear

Hough transform. A second horizontal line is then drawn, which intersects with the first line

at the iris edge that is closest to the pupil. This process is illustrated in the figure below

and is done for both the top and bottom eyelids. The second horizontal line allows maximum

isolation of eyelid regions. Canny edge detection is used to create an edge map, and only

horizontal gradient information is taken.


Figure – Stages of segmentation: top left) original eye image; top right) two circles overlaid for iris and pupil boundaries, and two lines for top and bottom eyelids; bottom left) horizontal lines drawn for each eyelid from the lowest/highest point of the fitted line; bottom right) probable eyelid and specular reflection areas isolated (black areas).
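As a companion to the circular Hough sketch above, the fragment below shows one way to fit an eyelid line with OpenCV's linear Hough transform on a horizontally biased edge map. It is only a sketch of the idea described in this section; the Sobel/Canny parameters and the vote threshold are assumptions.

# Sketch: fitting an eyelid line using horizontally biased Canny edges + linear Hough transform.
# Assumes OpenCV (cv2) and NumPy; `eye` is a grayscale eye image.
import cv2
import numpy as np

def fit_eyelid_line(eye):
    # Emphasise horizontal structure (eyelid edges) via the vertical derivative.
    blurred = cv2.GaussianBlur(eye, (5, 5), 1.5)
    grad_y = cv2.Sobel(blurred, cv2.CV_8U, 0, 1, ksize=3)
    edges = cv2.Canny(grad_y, 30, 90)

    # Linear Hough transform: returns (rho, theta) pairs for the strongest lines.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=60)
    if lines is None:
        return None
    rho, theta = lines[0][0]              # strongest line ~ eyelid boundary
    return rho, theta

eye = (np.random.rand(240, 320) * 255).astype(np.uint8)   # placeholder image
print(fit_eyelid_line(eye))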

2.3.2 Image Normalization

Once the iris region is successfully segmented from an eye image, the next stage is to

transform the iris region so that it has fixed dimensions in order to allow comparisons. The

dimensional inconsistencies between eye images are mainly due to the stretching of the iris

caused by pupil dilation from varying levels of illumination. Other sources of inconsistency

include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye

within the eye socket. The normalisation process will produce iris regions, which have the

same constant dimensions, so that two photographs of the same iris under different conditions

will have characteristic features at the same spatial location.

This is done using the following technique:


Daugman’s Rubber Sheet Model

The homogenous rubber sheet model devised by Daugman [1] remaps each point within the

iris region to a pair of polar coordinates (r, θ), where r is on the interval [0,1] and θ is the angle on the interval [0,2π].

Fig: Daugman’s Rubber Sheet Model

The remapping of the iris region from (x, y) Cartesian coordinates to the normalised non-concentric polar representation is modelled as

I(x(r, \theta), y(r, \theta)) \rightarrow I(r, \theta)

with

x(r, \theta) = (1 - r)\,x_p(\theta) + r\,x_l(\theta)

y(r, \theta) = (1 - r)\,y_p(\theta) + r\,y_l(\theta)

where I(x,y) is the iris region image, (x,y) are the original Cartesian coordinates, (r,θ) are the

corresponding normalised polar coordinates, and (x_p, y_p) and (x_l, y_l) are the coordinates of the

pupil and iris boundaries along the θ direction. The rubber sheet model takes into account

pupil dilation and size inconsistencies in order to produce a normalised representation with

constant dimensions. In this way the iris region is modelled as a flexible rubber sheet

anchored at the iris boundary with the pupil centre as the reference point.
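A compact NumPy sketch of this unwrapping is given below. It assumes the pupil and iris boundaries have already been found as circles by the segmentation stage and uses nearest-neighbour sampling; the 20 x 240 output resolution and the example boundary values are arbitrary choices for illustration.

# Sketch of the rubber sheet normalisation with circular pupil/iris boundaries.
# Assumes NumPy; `image` is a 2-D grayscale array, boundaries come from segmentation.
import numpy as np

def rubber_sheet(image, pupil, iris, n_radial=20, n_angular=240):
    """Unwrap the iris ring into a fixed-size rectangular block.
    pupil and iris are (x, y, radius) tuples from the segmentation stage."""
    px, py, pr = pupil
    ix, iy, ir = iris
    r = np.linspace(0.0, 1.0, n_radial).reshape(-1, 1)                      # r in [0, 1]
    theta = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False).reshape(1, -1)

    # Boundary points along direction theta: (x_p, y_p) on the pupil, (x_l, y_l) on the iris.
    xp, yp = px + pr * np.cos(theta), py + pr * np.sin(theta)
    xl, yl = ix + ir * np.cos(theta), iy + ir * np.sin(theta)

    # Linear interpolation between the two boundaries (the rubber sheet model).
    x = (1.0 - r) * xp + r * xl
    y = (1.0 - r) * yp + r * yl

    # Nearest-neighbour sampling of the original image.
    xi = np.clip(np.round(x).astype(int), 0, image.shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, image.shape[0] - 1)
    return image[yi, xi]                                                    # (n_radial, n_angular)

image = np.random.rand(240, 320)                                            # placeholder eye image
strip = rubber_sheet(image, pupil=(160, 120, 35), iris=(160, 120, 100))
print(strip.shape)                                                          # (20, 240)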


Fig: Illustration of the normalisation process for two images of the same iris taken under

varying conditions

Normalisation of two eye images of the same iris is shown in the figure above. The pupil is smaller in the bottom image; however, the normalisation process is able to rescale the iris region so that

it has constant dimension. In this example, the rectangular representation is constructed from

10,000 data points in each iris region. Note that rotational inconsistencies have not been

accounted for by the normalisation process, and the two normalised patterns are slightly

misaligned in the horizontal (angular) direction. Rotational inconsistencies will be accounted

for in the matching stage.

2.3.3 Feature Encoding and Matching

In order to provide accurate recognition of individuals, the most discriminating information

present in an iris pattern must be extracted. Only the significant features of the iris must be

encoded so that comparisons between templates can be made. Most iris recognition systems

make use of a band pass decomposition of the iris image to create a biometric template. The


template that is generated in the feature encoding process will also need a corresponding

matching metric, which gives a measure of similarity between two iris templates. This metric

should give one range of values when comparing templates generated from the same eye,

known as intra-class comparisons, and another range of values when comparing templates

created from different irises, known as inter-class comparisons. These two cases should give

distinct and separate values, so that a decision can be made with high confidence as to

whether two templates are from the same iris, or from two different irises.

Wavelet Encoding

Wavelets can be used to decompose the data in the iris region into

components that appear at different resolutions. Wavelets have the advantage over traditional

Fourier transform in that the frequency data is localised, allowing features which occur at the

same position and resolution to be matched up. A number of wavelet filters, also called a

bank of wavelets, is applied to the 2D iris region, one for each resolution with each wavelet a

scaled version of some basis function. The output of applying the wavelets is then encoded in

order to provide a compact and discriminating representation of the iris pattern.
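As a small illustration of such a multi-resolution decomposition (not the filter bank of any particular iris system), the fragment below applies a 2D discrete wavelet transform to a normalised iris strip using the PyWavelets package and quantises the detail coefficients to bits by their sign.

# Sketch: multi-resolution wavelet decomposition of a normalised iris strip.
# Assumes NumPy and PyWavelets (pywt); the Haar wavelet and 3 levels are arbitrary choices.
import numpy as np
import pywt

strip = np.random.rand(20, 240)                    # placeholder normalised iris region
coeffs = pywt.wavedec2(strip, wavelet="haar", level=3)

# coeffs[0] is the coarse approximation; coeffs[1:] are (horizontal, vertical, diagonal)
# detail sub-bands at successively finer resolutions.
bits = np.concatenate([(band >= 0).astype(np.uint8).ravel()
                       for level in coeffs[1:] for band in level])
print("template length in bits:", bits.size)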

Gabor Filters

Gabor filters are able to provide optimum conjoint representation of a signal in

space and spatial frequency. A Gabor filter is constructed by modulating a sine/cosine wave

with a Gaussian. This is able to provide the optimum conjoint localisation in both space and

frequency, since a sine wave is perfectly localised in frequency, but not localised in space.

Modulation of the sine with a Gaussian provides localisation in space, though with loss of

localisation in frequency. Decomposition of a signal is accomplished using a quadrature pair

of Gabor filters, with a real part specified by a cosine modulated by a Gaussian, and an

imaginary part specified by a sine modulated by a Gaussian. The real and imaginary filters

are also known as the even symmetric and odd symmetric components respectively. The

centre frequency of the filter is specified by the frequency of the sine/cosine wave, and the

bandwidth of the filter is specified by the width of the Gaussian. Daugman makes uses of a

2D version of Gabor filters [1] in order to encode iris pattern data. A 2D Gabor filter over the

an image domain (x,y) is represented as


where (x_0, y_0) specify position in the image, (α, β) specify the effective width and length, and (u_0, v_0) specify modulation, which has spatial frequency \omega_0 = \sqrt{u_0^2 + v_0^2}. The odd symmetric and even symmetric 2D Gabor filters are shown in the figure below.

Fig: A quadrature pair of 2D Gabor filters. Left) real component, or even symmetric filter, characterised by a cosine modulated by a Gaussian; right) imaginary component, or odd symmetric filter, characterised by a sine modulated by a Gaussian.
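The sketch below builds such a quadrature pair and produces two bits per position from the signs of the even and odd filter responses, in the spirit of the phase coding described here. The filter size, centre frequency and bandwidth values are illustrative assumptions, not Daugman's actual parameters.

# Sketch: quadrature pair of 2D Gabor filters and sign-based bit encoding of a
# normalised iris strip. Assumes NumPy and SciPy; parameter values are illustrative.
import numpy as np
from scipy.signal import convolve2d

def gabor_pair(size=15, alpha=4.0, beta=4.0, u0=0.15, v0=0.0):
    """Even (cosine) and odd (sine) Gabor kernels: Gaussian envelope times sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-np.pi * ((x / alpha) ** 2 + (y / beta) ** 2))
    phase = 2.0 * np.pi * (u0 * x + v0 * y)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def encode(strip, even, odd):
    """Two bits per position: the signs of the even and odd filter responses."""
    even_resp = convolve2d(strip, even, mode="same", boundary="wrap")
    odd_resp = convolve2d(strip, odd, mode="same", boundary="wrap")
    bits = np.stack([(even_resp >= 0), (odd_resp >= 0)], axis=-1)
    return bits.astype(np.uint8).ravel()

strip = np.random.rand(20, 240)            # placeholder normalised iris region
even, odd = gabor_pair()
template = encode(strip, even, odd)
print("template length in bits:", template.size)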

Matching:

For matching, the Hamming distance was chosen as a metric for recognition, since bit-wise

comparisons were necessary. The Hamming distance algorithm employed also incorporates

noise masking, so that only significant bits are used in calculating the Hamming distance

between two iris templates. Now when taking the Hamming distance, only those bits in the

iris pattern that correspond to ‘0’ bits in noise masks of both iris patterns will be used in the

calculation. The Hamming distance will be calculated using only the bits generated from the

true iris region, and this modified Hamming distance formula is given as

HD = \frac{\sum_{j=1}^{N} (X_j \oplus Y_j) \wedge \overline{Xn_j} \wedge \overline{Yn_j}}{N - \sum_{k=1}^{N} (Xn_k \vee Yn_k)}


where Xj and Yj are the two bit-wise templates to compare, Xnj and Ynj are the

corresponding noise masks for Xj and Yj, and N is the number of bits represented by each

template. Although, in theory, two iris templates generated from the same iris will have a

Hamming distance of 0.0, in practice this will not occur. Normalisation is not perfect, and

also there will be some noise that goes undetected, so some variation will be present when

comparing two intra-class iris templates. In order to account for rotational inconsistencies,

when the Hamming distance of two templates is calculated, one template is shifted left and

right bit-wise and a number of Hamming distance values are calculated from successive

shifts. This bit-wise shifting in the horizontal direction corresponds to rotation of the original

iris region by an angle given by the angular resolution used. If an angular resolution of 180 is

used, each shift will correspond to a rotation of 2 degrees in the iris region. This method is

suggested by Daugman, and corrects for misalignments in the normalised iris pattern caused

by rotational differences during imaging. From the calculated Hamming distance values, only

the lowest is taken, since this corresponds to the best match between two templates. The

number of bits moved during each shift is given by two times the number of filters used,

since each filter will generate two bits of information from one pixel of the normalised

region. The actual number of shifts required to normalise rotational

inconsistencies will be determined by the maximum angle difference between two images of

the same eye, and one shift is defined as one shift to the left, followed by one shift to the

right. The shifting process for one shift is illustrated in the figure below.


Fig: An illustration of the shifting process. One shift is defined as one shift left, and one shift

right of a reference template. In this example one filter is used to encode the templates, so

only two bits are moved during a shift. The lowest Hamming distance, in this case zero, is

then used since this corresponds to the best match between the two templates.
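A direct NumPy rendering of this masked, shift-tolerant Hamming distance is given below. The flat template layout, the meaning of the mask bits (1 = noise) and the number of shifts are assumptions made for illustration.

# Sketch: masked Hamming distance with left/right bit shifts for rotation tolerance.
# Assumes NumPy; templates and noise masks are flat uint8 bit arrays of equal length.
import numpy as np

def hamming_distance(x, y, xmask, ymask):
    """Fraction of disagreeing bits over positions not flagged as noise in either mask."""
    valid = (xmask == 0) & (ymask == 0)
    n_valid = int(np.count_nonzero(valid))
    if n_valid == 0:
        return 1.0                                   # nothing comparable
    return np.count_nonzero((x != y) & valid) / n_valid

def best_shifted_distance(x, y, xmask, ymask, max_shift=8, bits_per_shift=2):
    """Shift one template left/right and keep the lowest Hamming distance."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shift = s * bits_per_shift                   # two bits per filter per position
        hd = hamming_distance(np.roll(x, shift), y, np.roll(xmask, shift), ymask)
        best = min(best, hd)
    return best

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 9600, dtype=np.uint8)         # placeholder templates
b = rng.integers(0, 2, 9600, dtype=np.uint8)
ma = np.zeros(9600, dtype=np.uint8)                  # 0 = valid bit, 1 = noise
mb = np.zeros(9600, dtype=np.uint8)
print(best_shifted_distance(a, b, ma, mb))           # ~0.5 for unrelated templates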

2.4 Accuracy

The Iris Code constructed from these complex measurements provides such a tremendous wealth of data that iris recognition offers a level of accuracy orders of magnitude higher than that of other biometrics. Some statistical representations of the accuracy follow:


The odds of two different irises returning a 75% match (i.e. having a Hamming distance of 0.25): 1 in 10^16.

Equal Error Rate (the point at which the likelihood of a false accept and a false reject are the same): 1 in 12 million.

The odds of two different irises returning identical Iris Codes: 1 in 10^52.

Other numerical derivations demonstrate the unique robustness of these algorithms. A person's right and left eyes show only a statistically insignificant increase in similarity: 0.0048 on a 0.5 mean. This supports the hypothesis that iris shape and characteristics are phenotypic and not entirely determined by genetic structure. The algorithm can also tolerate partial occlusion of the iris: even if two-thirds of the iris were completely obscured, accurate measurement of the remaining third would result in an equal error rate of 1 in 100,000.

Iris recognition can also account for the ongoing changes to the eye and iris which are defining aspects of living tissue. The pupil's expansion and contraction, a constant process separate from its response to light, skews and stretches the iris. The algorithms account for such alteration after having located the boundaries of the iris. Dr. Daugman draws the analogy to a "homogenous rubber sheet" which, despite its distortion, retains certain consistent qualities. Regardless of the size of the iris at any given time, the algorithm draws on the same amount of data, and the resultant Iris Code is stored as a 512-byte template. A question asked of all biometrics is their ability to detect fraudulent samples. Iris recognition can address this in several ways: the detection of pupillary changes, reflections from the cornea, detection of contact lenses atop the cornea, and use of infrared illumination to determine the state of the sample eye tissue.


2.5 Decision Environment

FIGURE 7. DECISION ENVIRONMENT

The performance of any biometric identification scheme is characterized by its "decision environment". This is a graph superimposing the two fundamental histograms of similarity that the test generates: one when comparing biometric measurements from the SAME person (different times, environments, or conditions), and the other when comparing measurements from DIFFERENT persons. When the biometric template of a presenting person is compared to a previously enrolled database of templates to determine the person's identity, a criterion threshold (which may be adaptive) is applied to each similarity score. Because this determines whether any two templates are deemed to be "same" or "different", the two fundamental distributions should ideally be well separated, as any overlap between them causes decision errors.

One metric for decidability, or decision-making power, is d' (d-prime). This is defined as the separation between the means of the two distributions divided by the square root of their average variance. One advantage of using d' for comparing the decision-making power of biometrics is that it does not depend on any choice of decision threshold, which of course may vary from liberal to conservative when selecting the trade-off between the False


Accept Rate (FAR) and the False Reject Rate (FRR). The d' metric is a measure of the inherent degree to which any decrease in one error rate must be paid for by an increase in the other error rate when decision thresholds are varied. It reflects the intrinsic separability of the two distributions.
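Written out explicitly (following the verbal definition above, with subscripts s and d here denoting the same-eye and different-eye similarity distributions):

d' = \frac{|\mu_s - \mu_d|}{\sqrt{(\sigma_s^2 + \sigma_d^2)/2}}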

Decidability metrics such as d' can be applied regardless of what measure of similarity a biometric uses. In the particular case of iris recognition, the similarity measure is a Hamming distance: the fraction of bits in two Iris Codes that disagree. The distribution on the left in the graph shows the result when different images of the same eye are compared: typically about 10% of the bits may differ. But when Iris Codes from different eyes are compared, with rotations tried in order to retain the best match, the distribution on the right is the result: the fraction of disagreeing bits is very tightly packed around 45%. Because of the narrowness of this right-hand distribution, which belongs to the family of binomial extreme-value distributions, it is possible to make identification decisions with astronomical levels of confidence. For example, the odds of two different irises agreeing just by chance in more than 75% of their Iris Code bits is only one in 10^14. These extremely low probabilities of getting a false match enable the iris recognition algorithms to search through extremely large databases, even of a national or planetary scale, without confusing one Iris Code for another despite so many error opportunities.


Chapter 3: Uniqueness of Iris Codes

The first test was to confirm the uniqueness of iris patterns. Testing the uniqueness of

iris patterns is important, since recognition relies on iris patterns from different eyes being

entirely independent, with failure of a test of statistical independence resulting in a match.

Uniqueness was determined by comparing templates generated from different eyes to each

other, and examining the distribution of Hamming distance values produced. This distribution

is known as the inter-class distribution. According to statistical theory, the mean Hamming

distance for comparisons between inter-class iris templates will be 0.5. This is because, if

truly independent, the bits in each template can be thought of as being randomly set, so there

is a 50% chance of being set to 0 and a 50% chance of being set to 1. Therefore, half of the

bits will agree between two templates, and half will disagree, resulting in a Hamming

distance of 0.5. The templates are shifted left and right to account for rotational

inconsistencies in the eye image, and the lowest Hamming distance is taken as the actual

Hamming distance. Due to this, the mean Hamming distance for inter-class template

comparisons will be slightly lower than 0.5, since the lowest Hamming distance out of

several comparisons between shifted templates is taken. As the number of shifts increases,

the mean Hamming distance for inter-class comparisons will decrease accordingly.

Uniqueness was also determined by measuring the number of degrees of freedom represented by the templates. This gives a measure of the complexity of iris patterns, and can be calculated by approximating the collection of inter-class Hamming distance values as a binomial distribution. The number of degrees of freedom, DOF, can be calculated as

DOF = \frac{p(1 - p)}{\sigma^2}

where p is the mean and σ is the standard deviation of the distribution.


Chapter 4: Binomial Distribution of Hamming Distances

The histogram given below shows the outcomes of 2,307,025 comparisons between

different pairs of irises. For each pair comparison, the percentage of their Iris Code bits that

disagreed was computed and tallied as a fraction. Because of the zero-mean property of the

wavelet demodulators, the computed coding bits are equally likely to be 1 or 0. Thus when

any corresponding bits of two different Iris Codes are compared, each of the four

combinations (00), (01), (10), (11) has equal probability. In two of these cases the bits agree,

and in the other two they disagree. Therefore one would expect on average 50% of the bits

between two different Iris Codes to agree by chance. The above histogram presenting

comparisons between 2.3 million different pairings of irises shows a mean fraction of 0.499

of their Iris Code bits agreeing by chance.

FIGURE 9. BINOMIAL DISTRIBUTION OF IRISCODE HAMMING DISTANCES

The standard deviation of this distribution, 0.032, reveals the effective number of

independent bits (binary degrees of freedom) when Iris Codes are compared. Because of

correlations within irises and within computed Iris Codes, the number of degrees of freedom

is considerably smaller than the number of bits computed. But even correlated Bernoulli trials

(coin tosses) generate binomial distributions; the effect of their correlations is equivalent to

reducing the effective number of Bernoulli trials. For comparisons between different pairs of

Iris Codes, the distribution shown above corresponds to that for the fraction of "heads" that

one would get in runs of 244 tosses of a fair coin. This is a binomial distribution, with


parameters p=q=0.5 and N=244 Bernoulli trials (coin tosses). The solid curve in the above

histogram is a plot of such a binomial probability distribution. It gives an extremely exact fit

to the observed distribution, as may be seen by comparing the solid curve to the data

histogram.
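This N = 244 figure follows from the degrees-of-freedom formula given in Chapter 3, applied to the observed mean and standard deviation of the distribution:

N = \frac{p(1 - p)}{\sigma^2} = \frac{0.499 \times 0.501}{(0.032)^2} \approx 244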

The above two aspects show that Hamming Distance comparisons between different

Iris Codes are binomially distributed, with 244 degrees of freedom. The important corollary

of this conclusion is that the tails of such distributions are dominated by factorial

combinatorial factors, which attenuate at astronomic rates. This property makes it extremely

improbable that two different Iris Codes might happen to agree just by chance in, say, more

than 2/3rds of their bits (making a Hamming Distance below 0.33 in the above plot). The

confidence levels against such an occurrence are the reason why iris recognition can afford to

search extremely large databases, even on a national scale, with negligible probability of

making even a single false match.
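This "astronomic" attenuation of the binomial tails can be checked numerically. The sketch below uses SciPy to evaluate the cumulative binomial probability that two unrelated Iris Codes disagree in no more than a given fraction of their 244 effective bits; the values it prints are only indicative, since the real inter-class distribution is the minimum over several rotations rather than a single binomial draw.

# Sketch: tail probabilities of the binomial model for inter-class Hamming distances.
# Assumes SciPy; N = 244 degrees of freedom and p = 0.5, as described above.
from scipy.stats import binom

N, p = 244, 0.5
for hd_threshold in (0.33, 0.25):
    # Probability that two different Iris Codes disagree in at most hd_threshold of their
    # effective bits, i.e. agree by chance in at least (1 - hd_threshold) of them.
    k = int(hd_threshold * N)
    print(f"P(HD <= {hd_threshold}) ~ {binom.cdf(k, N, p):.3e}")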


Chapter 5: Commensurability of Iris Codes, Advantages and Disadvantages

A critical feature of this coding approach is the achievement of commensurability

among iris codes, by mapping all irises into a representation having universal format and

constant length, regardless of the apparent amount of iris detail. In the absence of

commensurability among the codes, one would be faced with the inevitable problem of

comparing long codes with short codes, showing partial agreement and partial disagreement

in their lists of features. It is not obvious mathematically how one would make objective

decisions and compute confidence levels on a rigorous basis in such a situation. This

difficulty has hampered efforts to automate reliably the recognition of fingerprints.

Commensurability facilitates and objectifies the code comparison process, as well as the

computation of confidence.

5.1 Advantages

Highly protected, internal organ of the eye

Externally visible; patterns imaged from a distance

Iris patterns possess a high degree of randomness

o Variability: 244 degrees-of-freedom

o Entropy: 3.2 bits per square-millimeter

o Uniqueness: set by combinatorial complexity

Changing pupil size confirms natural physiology

Pre-natal morphogenesis (7th month of gestation)

Limited genetic penetrance of iris patterns

Patterns apparently stable throughout life

Encoding and decision-making are tractable

Image analysis and encoding time: 1 second

Decidability index (d-prime): d' = 7.3 to 11.4

Search speed: 100,000 Iris Codes per second


5.2 Disadvantages of Iris Scanning

Small target (1 cm) to acquire from a distance (1m)

Located behind a curved, wet, reflecting surface

Obscured by eyelashes, lenses, reflections

Partially occluded by eyelids, often drooping

Deforms non-elastically as pupil changes size

Illumination should not be visible or bright

Some negative connotations

Iris scanning is a relatively new technology and is incompatible with the very

substantial investment that the law enforcement and immigration authorities of

some countries have already made into fingerprint recognition.

Iris recognition is very difficult to perform at a distance larger than a few meters

and if the person to be identified is not cooperating by holding the head still and

looking into the camera.

As with other photographic biometric technologies, iris recognition is susceptible

to poor image quality, with associated failure-to-enroll rates.

As with other identification infrastructure (national residents databases, ID cards,

etc.), civil rights activists have voiced concerns that iris-recognition technology

might help governments to track individuals against their will.


Chapter 6: Applications

Iris-based identification and verification technology has gained acceptance in a number of different areas. Application of iris recognition technology can be limited only by imagination. The important applications are the following:

ATMs and iris recognition: In the U.S., many banks have incorporated iris recognition technology into ATMs for the purpose of controlling access to one's bank accounts. After enrolling once (a "30 second" process), the customer need only approach the ATM, follow the instruction to look at the camera, and be recognized within 2-4 seconds. The benefit of such a system is that the customer who chooses to use the bank's ATM with iris recognition will have a quicker, more secure transaction.

Applications of this type are well suited to iris recognition technology. Being fairly large, iris recognition physical security devices are easily integrated into the mountable, sturdy apparatus needed for access control. The technology's phenomenal accuracy can be relied upon to prevent unauthorized release or transfer and to identify repeat offenders re-entering prison under a different identity.

Computer login: The iris as a living password.

National border controls: the iris as a living passport.

Telephone call charging without cash, cards or PIN numbers.

Ticketless air travel.

Premises access control (home, office, laboratory etc.).

Driving licenses and other personal certificates.


Entitlements and benefits authentication.

Forensics; birth certificates; tracking missing or wanted persons.

Credit-card authentication.

Automobile ignition and unlocking; anti-theft devices.

Anti-terrorism (e.g. suspect screening at airports).

Secure financial transaction (e-commerce, banking).

Internet security, control of access to privileged information.

"Biometric-key cryptography" for encrypting/decrypting messages.

Chapter 7: Iris Recognition Issues

Every biometric technology has its own challenges. When reviewing test results, it is

essential to consider the environment and protocols of the test. Much industry testing is

performed in laboratory settings on images acquired in ideal conditions. Performance in a real-world application may be very different, as there is a learning curve for would-be users of the system and not every candidate will enroll properly or quickly the first time. There are some issues which affect the functionality and applicability of iris recognition

technology in particular.

The technology requires a certain amount of user interaction: the enrollee must hold still in a certain spot, even if only momentarily. It would be very difficult to enroll or identify a non-cooperative subject. The eye must be lit well enough to allow the camera to capture the iris; any unusual lighting situation may affect the ability of the camera to acquire its subject. Lastly, as with any biometric, a backup plan must be in place if the unit becomes inoperable. Network crashes, power failures, and hardware and software problems are but a few of the possible ways in which a biometric system could become unusable. Since iris technology is designed to be an identification technology, the fallback procedures may not be as fully developed as in a verification scheme. Though these issues do not reduce the exceptional effectiveness of iris recognition technology, they must be kept in mind should a company decide to implement an iris-based solution.


CONCLUSION

The technical performance capability of the iris recognition process far surpasses that of any biometric technology now available. The Iridian process is designed for rapid exhaustive search of very large databases, a distinctive capability required for authentication today. The extremely low probabilities of getting a false match enable the iris recognition algorithms to search through extremely large databases, even of a national or planetary scale. As iris technology grows less expensive, it could very likely unseat a large portion of the biometric industry, e-commerce included; its technological superiority has already allowed it to make significant inroads into identification and security venues which had been dominated by other biometrics. Iris-based biometric technology has always been exceptionally accurate, and it may soon grow much more prominent.


REFERENCES

1. Daugman J (1999) "Wavelet demodulation codes, statistical independence, and pattern recognition." Institute of Mathematics and its Applications, Proc. 2nd IMA-IP, London: Albion, pp 244-260.

2. Daugman J (1999) "Biometric decision landscapes." Technical Report No. TR482, University of Cambridge Computer Laboratory.

3. Daugman J and Downing C J (1995) "Demodulation, predictive coding, and spatial vision." Journal of the Optical Society of America A, vol. 12, no. 4, pp 641-660.

4. Daugman J (1993) "High confidence visual recognition of persons by a test of statistical independence." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp 1148-1160.

5. Daugman J (1985) "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters." Journal of the Optical Society of America A, vol. 2, no. 7, pp 1160-1169.

6. Sanderson and J. Erbetta. Authentication for secure environments based on iris scanning technology. IEE Colloquium on Visual Biometrics, 2000.

7. J. Daugman. How iris recognition works. Proceedings of the 2002 International Conference on Image Processing, vol. 1, 2002.

8. R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride. A system for automated iris recognition. Proceedings IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp 121-128, 1994.

9. Y. Zhu, T. Tan, Y. Wang. Biometric personal identification based on iris patterns. Proceedings of the 15th International Conference on Pattern Recognition, Spain, vol. 2, 2000.

SOURCE LINKS

www.irisscan.com

www.cl.cam.ac.uk/users/jgd1000

www.sensar.com
