
On the use of multispectral conjunctival vasculature as a soft biometric∗

Simona Crihalmeanu
West Virginia University
Morgantown, WV 26506, USA
[email protected]

Arun Ross
West Virginia University
Morgantown, WV 26506, USA
[email protected]

Abstract

Ocular biometrics has made significant progress over the past decade, primarily due to advances in iris recognition. Initial research in the field of iris recognition focused on the acquisition and processing of frontal irides, which may require considerable subject cooperation. However, when the iris is off-angle with respect to the acquisition device, the sclera (the white part of the eye) is exposed. The sclera is covered by a thin transparent layer called the conjunctiva. Both the episclera and conjunctiva contain blood vessels that are observable from the outside. In this work, these blood vessels are referred to as conjunctival vasculature. Iris patterns are better observed in the near-infrared spectrum, while conjunctival vasculature is better seen in the visible spectrum. Therefore, multispectral (i.e., color-infrared) images of the eye are acquired to allow for the combination of the iris biometric with the conjunctival vasculature. The paper focuses on conjunctival vasculature enhancement, registration and matching. Initial results are promising and suggest the need for further investigation of this biometric in a bimodal configuration with the iris.

1. Introduction

Ocular biometrics has made significant progress over the past decade, primarily due to advances in iris recognition [9][1]. The iris has been demonstrated to be a reliable biometric with high variability across individuals when imaged in the near-infrared (NIR) spectrum [3]. Initial research focused on the acquisition and processing of frontal iris images. However, when the iris is off-angle with respect to the acquisition device, the sclera (also known as the white of the eye) is exposed. The sclera is the external layer of the eye: a firm, dense membrane comprising white and opaque fibrin connective tissue, organized in many bands of parallel and interlacing fibrous tissue bundles.

∗Thanks to Dr. Reza Derakhshani for the useful discussions and to Peter Hein for assisting us with the data collection. This work was partially funded by the Center for Identification Technology Research (CITeR).

The sclera's outer surface, called the episclera, contains the blood vessels nourishing the sclera. The anterior part of the sclera is covered by the conjunctival membrane, a thin layer that helps lubricate the eye for eyelid closure. The rich vasculature revealed in the episclera and conjunctival membrane is referred to as conjunctival vasculature in this paper. Previous studies [4][2] investigated the feasibility of using the conjunctival vasculature patterns, imaged in the visible spectrum, as a biometric. Iris patterns are better observed in the NIR spectrum, while the vasculature patterns are better observed in the visible (RGB) spectrum. Therefore, multispectral images of the eye can potentially be used to combine the iris biometric with the conjunctival vasculature for improved recognition. This paper focuses on multispectral conjunctival vasculature enhancement and matching. The conjunctival vasculature is viewed as a soft biometric since it is expected to have limited discrimination capability compared to the iris.

The paper is organized as follows: section 2 describes multispectral data acquisition; section 3 presents the process of image denoising; section 4 describes specular reflection detection and removal; section 5 discusses sclera segmentation; section 6 describes conjunctival vasculature enhancement; section 7 describes the registration of the images; section 8 presents experimental results evaluating the recognition accuracy. The block diagram of the proposed system is shown in Figure 1.

2. Image acquisition

2.1. Multispectral Imaging

Multispectral imaging captures the image of an object at multiple spectral bands, often ranging from the visible spectrum to the infrared spectrum. The visible spectral band [5] comprises three narrow sub-bands, called the red, green and blue channels, that range from 0.4μm to 0.7μm. The infrared spectrum is divided into NIR (near-infrared), MIR (midwave infrared), FIR (far infrared) and thermal bands, ranging from 0.7μm to over 10μm.


Figure 1. Block diagram showing the pre-processing and matching of multispectral conjunctival vasculature

Figure 2. (a) The DuncanTech MS3100 camera: CIR/RGB spectral configuration (adapted from Hi-Tech Electronics: www.hitech.com.sg). (b) Color-infrared image (NIR-Red-Bayer pattern). (c) NIR component. (d) Red component. (e) Bayer pattern. (f) RGB image. (g) Green component. (h) Blue component. (i) Composite image (NIR-Red-Green)

2.2. Multispectral acquisition system

Images of the eye are collected using the Redlake (DuncanTech) MS3100 multispectral camera.¹ The camera has three array sensors. Between the lenses and the sensors there is a color-separating prism that splits the incoming broadband light into three optical channels. Figure 2(a) displays the configuration of the multispectral camera.

The camera acquires imagery in four spectral bands from a three-channel optical system. As specified by the sensor manufacturer, the center wavelength of each spectral band is as follows: blue - 460 nm, green - 540 nm, red - 660 nm and NIR - 800 nm. The CIR (Color InfraRed)/RGB configuration outputs three channels, each represented as a 2D matrix of pixels; the channels are stacked along the third dimension.

¹Hi-Tech Electronics, Spectral Configuration Guide for DuncanTech 3-CCD Cameras, http://www.hitech.com.sg

The three channels correspond to the near-infrared (NIR) component, the red component, and a Bayer mosaic-like pattern in which red pixels on the color array are ignored. Figure 2(b)-(i) shows an example of a CIR image along with its components. The first channel - the NIR component - is stored as a separate image. The second channel - the red component - is stored as the red component of the RGB image. The green and blue components are obtained from the third channel of the CIR/RGB configuration through a Bayer pattern demosaicing algorithm.

The system used to collect the multispectral images is composed of an ophthalmologist's slit-lamp mount and a light source. The mount consists of a chin rest to position the head and a mobile arm to which the multispectral camera is attached, so that it can be easily manipulated to focus on the white of the eye while the person is gazing to the left or to the right. The light source illuminates the eye using a spectral range from 350 nm to 1700 nm, and is projected onto the eye via an optic fiber guide with a ring light attached to its end. Because of the reflective qualities of the eyeball, pointing a light source directly at the subject's eye creates a glare on the sclera. The issue is resolved by directing the light source such that the incoming rays to the eyeball are approximately perpendicular to the pupil region. This is not always possible due to subtle movements of the eyeball. Thus, glare is not always contained within the pupil region and may overlap with the iris.

The multispectral camera generates images of size 1040x1392x3 pixels, from which the first 17 columns are removed due to artifacts. The final size of the images is, therefore, 1035x1373x3. Videos of the right and left eye are captured from 49 subjects, with each eye gazing to the right or to the left. Eight images per eye per gaze direction are selected from the video. The total number of images is 1,536. For two subjects, only data from the right eye was collected due to medical issues. Working with images from the same video allows us to bypass some of the challenges encountered by Crihalmeanu et al. [2], primarily due to viewing angle. The process of frame selection ensures that there is no remarkable change in pose. Our multispectral collection contains images of the eye with different iris colors. Based on the Martin-Schultz scale² often used in physical anthropology, the eyes are classified as light eyes (blue, green, gray), mixed eyes (blue, gray or green with brown pigment) and dark eyes (brown, dark brown, almost black). In our work, we consider two categories: light eyes (light and mixed eyes from the Martin-Schultz scale) and dark eyes. Examples of the acquired images are displayed in Figure 3.

²http://wapedia.mobi/en/Eye color


Figure 3. Example of Color-Infrared (CIR) images

2.3. From Bayer mosaic pattern to RGB

The Bayer-like pattern is due to the placement of a grid of tiny color filters on the face of the CCD sensor array to filter the light so that only one of the colors (red, blue or green) reaches any given pixel. Here, 25% of the pixels are assigned to blue, 25% to red and 50% to green. The blue and green components are obtained from the Bayer mosaic pattern through interpolation³. Figure 2(i) shows a NIR-Red-Green composite image.
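As a concrete illustration, the green and blue planes can be recovered by normalized bilinear convolution over the sampled sites. The following is a minimal sketch, not the exact routine cited above; mosaic is the third CIR channel as a float array, and green_sites/blue_sites are hypothetical boolean masks marking which pixels carry each color:

import numpy as np
from scipy.ndimage import convolve

def interpolate_bayer_gb(mosaic, green_sites, blue_sites):
    # Bilinear kernel: full weight at the center pixel, half at edge
    # neighbors, quarter at diagonal neighbors.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    planes = []
    for sites in (green_sites, blue_sites):
        sampled = np.where(sites, mosaic, 0.0)
        # Normalized convolution: interpolated intensity divided by the
        # interpolated sampling density fills in the missing sites.
        num = convolve(sampled, kernel, mode='mirror')
        den = convolve(sites.astype(float), kernel, mode='mirror')
        planes.append(num / np.maximum(den, 1e-8))
    return planes  # [green_plane, blue_plane]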

3. Image denoising

The red, green, blue and NIR components obtained from the CIR images are in general noisy (Figure 4(a)(c)(e)(g)). The denoising algorithm employed is based on a wavelet transformation. A double-density complex discrete wavelet transform (DDCDWT) [11], which combines the characteristics and properties of the double-density discrete wavelet transform (DDDWT) [10] and the dual-tree discrete wavelet transform (DTDWT) [12], is used. The transformation is based on two scaling functions and four distinct wavelets, such that one pair of wavelets forms an approximate Hilbert transform pair and the other pair of wavelets are offset from one another by one half. It is implemented by applying four 2-D double-density discrete wavelet transforms in parallel to the input data with different filter sets for rows and columns, yielding 32 oriented wavelets (Figure 5(a)) along one of six angles at ±15, ±45, ±75 degrees.⁴ The method is shift-invariant, possesses improved directional selectivity, and is based on FIR perfect reconstruction filter banks, as illustrated in Figure 5(b). For all scales and subbands, the magnitudes of the complex wavelet coefficients are processed by soft thresholding, which sets the coefficients with values less than a threshold to zero and subtracts the threshold value from the non-zero coefficients. Original and denoised red, green, blue and NIR images are presented in Figure 4. Visual differences are not pronounced due to image rescaling.
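The soft-thresholding rule described above admits a compact implementation. A minimal sketch, assuming complex coefficient arrays and a scalar threshold t (the paper does not specify how t is chosen):

import numpy as np

def soft_threshold(coeffs, t):
    # Shrink coefficient magnitudes by t, zeroing anything below t,
    # while preserving the phase of each complex coefficient.
    mag = np.abs(coeffs)
    shrunk = np.maximum(mag - t, 0.0)
    phase = np.exp(1j * np.angle(coeffs))
    return shrunk * phase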

4. Specular reflection

Specular reflections have to be detected and removed for a better segmentation of the sclera (described in Section 5). As stated in Section 2.2, the data acquisition system uses a light source to illuminate the eye region.

³RGB “Bayer” Color and MicroLenses, http://www.siliconimaging.com/RGB Bayer.htm

⁴http://taco.poly.edu/selesi/DoubleSoftware/index.html

Figure 4. Denoising with the double-density complex discrete wavelet transform. a) Original NIR. b) Denoised NIR. c) Original red component. d) Denoised red component. e) Original green component. f) Denoised green component. g) Original blue component. h) Denoised blue component. Visual differences are not pronounced due to image rescaling

Figure 5. (a) Plot of complex 2-D double-density dual-tree wavelets. (b) Iterated filter bank for the double-density complex discrete wavelet transform [12]

The light directed to the eyeball generates specular reflection that has a ring-like shape, caused by the shape of the source of illumination, as well as highlights, due to the humidity of the eye and the curved shape of the eyeball. Both are detected and removed by a fast inpainting algorithm. In some images, the ring-like shape may be an incomplete circle, an ellipse, or an arbitrary curved shape with a wide range of intensity values. It may be located partially in the iris region, making its detection and removal more difficult, especially since the iris texture has to be preserved as much as possible. The specular reflections are detected using different threshold values for each component: 0.60 for NIR, 0.50 for red and 0.80 for green. Only regions less than 3500 pixels in size are labeled as specular reflection and inpainted. In digital inpainting, the information from the boundary of the region to be inpainted is propagated smoothly inside the region. The value to be inpainted at a pixel is calculated using a


PDE⁵ in which partial derivatives are replaced by finite differences between the pixel and its eight neighbors.
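A rough sketch of this detect-and-inpaint step is given below. The per-channel thresholds follow the text; how the three channel masks are combined is not spelled out, so a union is assumed here, and the inpainting loop is a simple Jacobi-style diffusion (finite differences over the eight neighbors), not the exact cited routine:

import numpy as np
from scipy.ndimage import convolve, label

def specular_mask(nir, red, green, max_area=3500):
    # Per-channel thresholds from the text; union of the masks assumed.
    cand = (nir > 0.60) | (red > 0.50) | (green > 0.80)
    labeled, n = label(cand)
    mask = np.zeros(cand.shape, dtype=bool)
    for i in range(1, n + 1):
        region = labeled == i
        if region.sum() < max_area:   # only small regions are inpainted
            mask |= region
    return mask

def inpaint(img, mask, iters=500):
    # Repeatedly replace masked pixels with the mean of their eight
    # neighbors, diffusing boundary values smoothly into the region.
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0.0
    out = img.copy()
    out[mask] = out[~mask].mean()     # neutral initialization
    for _ in range(iters):
        nb = convolve(out, kernel, mode='nearest') / 8.0
        out[mask] = nb[mask]
    return out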

5. Sclera region segmentation

When the entire image of the eye is used for enhancing the conjunctival vasculature, it is difficult to distinguish between the different types of lines that appear in it: wrinkles, crow's feet, eyelashes, and blood vessels. Therefore, a good segmentation of the sclera region that clearly exhibits the blood vessels is necessary. The method employed to segment the sclera region is inspired by work done in the processing of LandSat imagery (Land + Satellite) [13]. Since water absorbs NIR light, the corresponding regions appear dark in the image. The segmentation is based on the fact that the skin has lower water content than the sclera, and hence exhibits a higher reflectance in NIR. The algorithm to segment the sclera has three main stages, as described below.

5.1. Coarse sclera segmentation

1. Compute an index called the normalized sclera index,

NSI(x, y) = (NIR(x, y) − G(x, y)) / (NIR(x, y) + G(x, y)),

where NIR(x, y) and G(x, y) are the pixel intensities of the NIR and green components, respectively, at pixel location (x, y). The difference NIR − G is larger for pixels pertaining to the sclera region; it is normalized to help compensate for uneven illumination (Figures 6(b)(e) and 7(b)). In some cases, to improve the accuracy of segmentation, the intensity values of the NIR component can be adjusted with a γ correction value between 0.65 and 0.9, such that the mapping to the new intensity values is weighted toward higher output values.

2. Locate the sclera by thresholding the NSI image with the threshold value η = 0.1. For the right eye of 4 subjects and the left eye of 2 subjects, the segmentation results were better with η = 0.12.⁶ Figure 6(f) displays the scatter plot of NIR intensity values versus the corresponding green intensity values for all pixels in the image. The pixels above the threshold boundary in this plot represent the background region, while the rest represent the sclera region. Changing the value of η will modify the slope of the boundary line between the pixels of the two segmented regions. The output of the thresholding operation is a binary image. Figures 6(c) and 7(c) display the segmented sclera region as the largest blue region (pixels with value 1 in the binary image).⁷ For dark irides (brown and dark brown), the sclera region excluding the iris is localized (Figure 6(g), referred to henceforth as

⁵http://www.mathworks.com/matlabcentral/fileexchange/4551

⁶When collecting the data, the ring of light is positioned to the left or right side of the eye to ensure uniform illumination across the eye. This introduces some variability in the lighting across different eyes, and hence the small differences in the threshold values

⁷MathWorks, Image Processing Toolbox, Finding Vegetation in a Multispectral Image, http://www.mathworks.com/products/image/demos.html

Figure 6. Sclera region segmentation; example using a dark color iris. a) Composite image (NIR, R, G). b) Normalized sclera index (NSI). c) Threshold applied to NSI. d) Segmented sclera region IS. e) Histogram of the NSI. f) NIR vs. green intensity values. g) Convex hull of the sclera region ISCH. h) Convex hull of the iris region within the sclera region IIRCH. i) Sclera mask. j) Contour of sclera mask imposed on original composite image

Figure 7. Sclera region segmentation; example using a light color iris. a) Composite image (NIR, R, G). b) Normalized sclera index (NSI). c) Threshold applied to NSI. d) Segmented sclera region IS. e) Convex hull of the sclera region ISCH. f) Convex hull of the iris region within the sclera region IIRCH. g) Sclera mask. h) Contour of sclera mask imposed on original composite image

IS). Thus, in this case, further segmentation of the sclera and iris is not required. For light irides (blue, hazel, green), regions pertaining to both the sclera and iris are segmented (Figure 7(d), referred to henceforth as IS). Here, further separation of the sclera and iris is needed. The overlap between the segmented sclera region described in this section and the segmented pupil region described in Section 5.2 provides the criterion to differentiate automatically between the two outcomes (corresponding to a light or dark iris). As seen in Figures 6(c) and 7(c), the location of the pupil is also visible, either as a blue region that does not overlap the sclera region (in dark irides) or as a green disk within the sclera region (in light irides). Because the extent of overlap


of the pupil on the segmented sclera depends on the color of the iris, this information can be exploited only if the color of the iris is known in advance. Therefore, in Section 5.2 we present an automatic way of finding the pupil location regardless of the color of the iris.

3. Smooth the contour of the sclera region by finding its convex hull, as shown in Figures 6(g) and 7(e): ISCH = ConvexHull(IS). For dark irides, this operation will include a portion of the iris region that has to be removed; thus, step 4 is needed only for dark color irides. Since the proposed algorithm has no prior information about the color of the iris, it is applied to all images irrespective of eye color.

4. Find the iris region included within the sclera region using the convex hull operator and select the largest connected region. Find its convex hull, IIRCH, as shown in Figures 6(h) and 7(f). For light irides this operation will determine the pupil location.

5. Find the sclera mask, shown in Figure 6(i) and Figure 7(g), as the difference SMASK = ISCH − IIRCH. For dark irides, the result of this operation is the sclera mask; for light irides, it is the sclera region without any part of the pupil region. Step 6 is needed for light irides and will have no effect on dark irides.

6. Apply the morphological operation of filling the holes in SMASK (i.e., the black pixels surrounded by white pixels are set to logical 1).

Figures 6(j) and 7(h) display the contour of the sclera mask when imposed on the original composite image.
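Steps 1 and 2 above reduce to a few array operations. A minimal sketch, assuming float NIR and green components scaled to [0, 1]; the orientation of the comparison follows the statement that NIR − G is larger on the sclera, and may need flipping depending on the data:

import numpy as np

def coarse_sclera_mask(nir, green, eta=0.1, gamma=None):
    # Optional gamma correction of the NIR component (0.65-0.9 in the
    # text); gamma < 1 weights the mapping toward higher output values.
    if gamma is not None:
        nir = nir ** gamma
    # Normalized sclera index of Section 5.1, step 1.
    nsi = (nir - green) / (nir + green + 1e-8)
    return nsi > eta   # binary sclera candidates (step 2)

The convex-hull and hole-filling operations of steps 3-6 are available in standard image-processing toolkits and are omitted here.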

5.2. Pupil region segmentation

The pupil location is used only to determine whether further segmentation of the sclera and iris is needed. Hence, the accurate determination of its boundary is not necessary. In NIR images, the pupil region is characterized by very low intensity values, and by employing a simple threshold, the pupil region is obtained. However, this isolates the eyelashes as well. In order to isolate only the pupil, the following steps are undertaken:

1. Geometrically resize the NIR component by a factor of 1/3 and apply a power-law transformation [5] to its pixels: IPL = c ∗ II^x, where c = 1 is a constant, IPL is the output image, II is the input NIR image, and x = 0.7.

2. Threshold IPL with a value of 0.1. The resulting binary image, IBW, has the pupil and eyelashes denoted by 1.

3. Find the contour of the sclera region as segmented in Section 5.1, ISCH.

4. Use the Hough transform for line detection. Select and remove the highest peak, corresponding to the longest line.

5. Fit an ellipse to the remaining sclera contour points, E(a, b, (x0, y0), θ), where a, b, (x0, y0) and θ correspond to the length of the semi-major axis, the length of the semi-minor axis, the center of the ellipse, and its orientation,

Figure 8. Pupil segmentation. a) Eye with dark color iris. b) Segmented pupil for dark color iris. c) Eye with light color iris. d) Segmented pupil for light color iris

respectively. Define an elliptical mask (to detect the pupil region) to extract the pixels located within the ellipse.

6. Impose the elliptical mask on the binary image IBW obtained in step 2. The result is a binary image, IP, that will contain the pupil, and possibly eyelashes, as logical-1 pixels.

7. Count the number of connected objects, N, in IP. If N > 1, through an iterative process, decrease the ellipse's semi-major and semi-minor axes (by 2%) and construct new elliptical masks that, when imposed on the binary image IBW, will render a smaller value for N. The connected object for N = 1 will correspond to the location of the pupil.

while N > 1 do
    a = a − (2/100) × a
    b = b − (2/100) × b
    EMASK = E(a, b, (x0, y0), θ)
    IP = IBW ∩ EMASK
    find N in IP
end while

8. Fit a new ellipse E to the dilated region corresponding to the location of the pupil. Compute IP = IBW ∩ EMASK. Even if low-intensity regions in the iris are inadvertently selected, the pupil region has by far the largest area among all connected objects.

9. Fit an ellipse to the pixels pertaining to the pupil region to find the pupil mask, PMASK. Resize the pupil mask to the original NIR image size.

The procedure described above is applied to all the images regardless of the color of the iris. For 15 images, the algorithm failed to correctly segment the pupil.
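The iterative shrinking of step 7 can be sketched directly from the pseudocode. The ellipse fitting of steps 5, 8 and 9 is assumed to be available elsewhere; only the 2% shrink loop is shown:

import numpy as np
from scipy.ndimage import label

def ellipse_mask(shape, a, b, center, theta):
    # Boolean mask of pixels inside the ellipse E(a, b, (x0, y0), theta);
    # center is (x0, y0) in (column, row) order.
    yy, xx = np.indices(shape, dtype=float)
    x, y = xx - center[0], yy - center[1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

def isolate_pupil(i_bw, a, b, center, theta):
    # Shrink both axes by 2% per iteration until the masked binary
    # image contains at most one connected object (the pupil).
    while True:
        i_p = i_bw & ellipse_mask(i_bw.shape, a, b, center, theta)
        _, n = label(i_p)
        if n <= 1:
            return i_p
        a -= 0.02 * a
        b -= 0.02 * b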

5.3. Sclera segmentation

As mentioned in Section 5.1, for light color iris images, further segmentation of the iris is needed. The criterion for finalizing the segmentation of the sclera is given by the intersection of the pupil region, PMASK, found in Section 5.2, and the sclera region, SMASK, found in Section 5.1.

5.3.1 Iris segmentation

We use the k-means clustering algorithm (k = 2) to segment the iris. The algorithm uses the pixels contained within the sclera mask SMASK (Figure 9(e)) as its input. Each pixel is viewed as a feature vector consisting of


Figure 9. Iris segmentation for a light color iris. a) NIR component. b) Red component. c) Proportion of sclera in north direction p↑(x, y). d) Proportion of sclera in south direction p↓(x, y). e) Convex hull of the sclera ISCH. f) K-means output. g) Segmented sclera

Figure 10. Contour of the segmented sclera imposed on the green component. a) and b) light color iris. c) and d) dark color iris

the intensity value of the NIR component (Figure 9(a)), the intensity value of the red component (Figure 9(b)), and the proportions of sclera in the north direction p↑(x, y) and south direction p↓(x, y) [7], as assessed in the red component (Figure 9(c)(d)). The value of p(x, y) is set to 0 for all pixels outside the sclera region. For a pixel (x, y) inside the sclera region, the proportion of sclera in the north direction, p↑(x, y), is computed as the mean of all the pixels of column y above (x, y), and the proportion of sclera in the south direction, p↓(x, y), is computed as the mean of all the pixels of column y below (x, y). Euclidean distances between the origin of the coordinate system and the centroid of each cluster are computed in order to determine the label of the two clusters (the label can be 'sclera' or 'iris'). The larger distance is associated with the sclera cluster; this is the white region in Figure 9(f). The smaller distance is associated with the iris cluster; this is the black region in Figure 9(f). Two binary images, a mask for the sclera region (Figure 9(f), pixel value 1) and a mask for the iris region (Figure 9(f), pixel value 0), represent the output. On examining the two binary masks, we observe that in some images the k-means algorithm erroneously labels a portion of the sclera as being the iris (mainly the corners of the sclera that are less illuminated and have lower intensity values). To address this issue, we find the region in the iris mask that overlaps with the pupil mask and subtract it from the original sclera region ISCH.
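The feature construction and cluster labeling can be sketched as follows; the column-wise proportions are computed here from the binary sclera mask (the exact quantity assessed in the red component is not fully specified), and SciPy's k-means stands in for the paper's implementation:

import numpy as np
from scipy.cluster.vq import kmeans2

def sclera_proportions(mask):
    # For each pixel, the mean of its column's sclera-mask entries
    # above (p_up) and below (p_down); 0 outside the sclera region.
    m = mask.astype(float)
    h = m.shape[0]
    r = np.arange(h, dtype=float)[:, None]
    csum = np.cumsum(m, axis=0)
    p_up = (csum - m) / np.maximum(r, 1.0)
    p_down = (csum[-1][None, :] - csum) / np.maximum(h - 1 - r, 1.0)
    p_up[~mask] = 0.0
    p_down[~mask] = 0.0
    return p_up, p_down

def segment_sclera_iris(nir, red, mask):
    # Cluster the masked pixels into two groups; the cluster whose
    # centroid is farther from the origin is labeled 'sclera'.
    p_up, p_down = sclera_proportions(mask)
    feats = np.stack([nir[mask], red[mask],
                      p_up[mask], p_down[mask]], axis=1)
    centroids, labels = kmeans2(feats, 2, minit='++')
    sclera_cluster = int(np.argmax(np.linalg.norm(centroids, axis=1)))
    sclera = np.zeros(mask.shape, dtype=bool)
    sclera[mask] = labels == sclera_cluster
    return sclera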

6. Enhancement of blood vessels observed in the sclera region

To improve segmentation of the blood vessel patterns, the segmented sclera image is pre-processed in two consecutive steps: image enhancement followed by line enhancement. In the first step, the RGB image is converted to the L*a*b color space. Contrast-limited adaptive histogram equalization (CLAHE) is applied to the luminance component L*. The algorithm divides the entire image into smaller square tiles; each tile is enhanced using histogram equalization. This induces artificial boundaries between tiles, which are removed using bilinear interpolation. The enhanced L*a*b image is then converted back to the RGB color space. An examination of the three components of the RGB image suggests that the green component has the best contrast between the blood vessels and the background. In the second step, a selective enhancement filter for lines, as described in [8], is applied to the green component. The enhancement filter for lines, and implicitly for blood vessels, is described by the equation:

Iline(λ1, λ2) = { |λ1| − |λ2|, if λ1 < 0;  0, if λ1 ≥ 0 }        (1)

where λ1 and λ2 (with |λ1| > |λ2|) are the two eigenvalues of the Hessian matrix at each pixel, computed as follows: λ1 = K + √(K² − Q²), λ2 = K − √(K² − Q²), where K = (Ixx + Iyy)/2, Q = √(Ixx ∗ Iyy − Ixy ∗ Iyx), and Ixx, Iyy, Ixy and Iyx represent the second-order derivatives in the x and y directions. The algorithm for blood vessel enhancement is based on [8]:

1. Determine the minimum (dmin) and maximum (dmax) diameter of the blood vessels.
2. Consider N (= 5) multiple 2D Gaussian distributions with standard deviation σ within the interval [dmin/4, dmax/4]: σ = [0.25, 0.5, 1, 2, 3].
3. Convolve each Gaussian distribution with the original image.
4. Compute the two eigenvalues for each pixel, for each of the N convolved images.
5. Using the eigenvalues, compute Iline.
6. Multiply each pixel with the square of the corresponding Gaussian standard deviation: Iline ∗ σ².
7. Compute the maximum value at each pixel over the N outputs: Iout = max(Iline ∗ σ²).
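The multiscale filter of steps 1-7 can be sketched with Gaussian-derivative filters. A minimal version, assuming a float green-channel image; Gaussian derivatives replace the explicit convolve-then-differentiate of steps 3-4, which is equivalent:

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_vessels(img, sigmas=(0.25, 0.5, 1.0, 2.0, 3.0)):
    responses = []
    for s in sigmas:
        # Second-order Gaussian derivatives (the Hessian entries).
        ixx = gaussian_filter(img, s, order=(0, 2))
        iyy = gaussian_filter(img, s, order=(2, 0))
        ixy = gaussian_filter(img, s, order=(1, 1))
        # Eigenvalues via K and Q as defined in the text (det = Q^2).
        k = (ixx + iyy) / 2.0
        det = ixx * iyy - ixy * ixy
        disc = np.sqrt(np.maximum(k * k - det, 0.0))
        la, lb = k + disc, k - disc
        # Order the eigenvalues so that |l1| >= |l2|.
        swap = np.abs(lb) > np.abs(la)
        l1 = np.where(swap, lb, la)
        l2 = np.where(swap, la, lb)
        # Line response of Eq. (1), scale-normalized by sigma^2.
        line = np.where(l1 < 0, np.abs(l1) - np.abs(l2), 0.0)
        responses.append(line * s * s)
    # Maximum response over the N scales (step 7).
    return np.max(np.stack(responses), axis=0)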

7. Registration

Image registration is the process of finding a transformation that aligns one image with another. The method used here, described in [6], models a local affine and a global smooth transformation. It also accounts for contrast and brightness variations between the two images to be registered. The registration between two images, the source f(x, y, t) and the target f(x, y, t − 1), is modeled by the transformation m⃗ = (m1, m2, m3, m4, m5, m6, m7, m8):


Figure 13. Matching. a) user1-image1. b) user1-image2. c) user2-image1. Match scores: a) with b) = 0.94952; a) with c) = 0.25611

Figure 11. Blood vessel enhancement. (a) Green component of the segmented sclera. (b) The complement of enhanced blood vessels

Figure 12. Image registration. a) Source image. b) Target image. c) Registered source. d) Flow image depicting the warping process

m7 f(x, y, t) + m8 = f(m1x + m2y + m5, m3x + m4y + m6, t − 1), where m1, m2, m3, and m4 are the linear affine parameters, m5 and m6 are the translation parameters, and m7 and m8 are the contrast and brightness parameters. A multiscale approach is employed by using a Gaussian pyramid to downsample the images to be registered. From the coarse to the fine level, the transformation m⃗ is determined globally at each level, and then locally, and the estimated parameters are used to warp the source image. Figure 12 shows the results of the registration on two pre-processed sclera images. Using the linear affine parameters m1, m2, m3, and m4, and the translation parameters m5 and m6, the sclera mask of the source image is also registered.
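For concreteness, the model's constraint can be evaluated for a candidate parameter vector as below; the actual coarse-to-fine estimation of m⃗ is in [6] and is not reproduced here. The warp resamples the target at the affinely mapped coordinates and compares it against the contrast/brightness-adjusted source:

import numpy as np
from scipy.ndimage import map_coordinates

def model_residual(src, tgt, m):
    # m = (m1..m8): residual of the constraint
    # m7*f(x,y,t) + m8 - f(m1*x + m2*y + m5, m3*x + m4*y + m6, t-1).
    h, w = src.shape
    yy, xx = np.indices((h, w), dtype=float)
    xw = m[0] * xx + m[1] * yy + m[4]
    yw = m[2] * xx + m[3] * yy + m[5]
    tgt_warped = map_coordinates(tgt, [yw, xw], order=1, mode='nearest')
    return (m[6] * src + m[7]) - tgt_warped

Minimizing the sum of squared residuals globally, then locally within windows, at each level of the Gaussian pyramid yields the smooth local-affine registration described above.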

8. Matching

The similarity between two sclera images is assessed using cross-correlation between regions of the sclera within the overlap region of the two sclera masks, as shown in Figure 13. To generate genuine scores, this cross-correlation is performed between pairs of images of each subject; to generate impostor scores, cross-correlation is performed between the first images of each subject pair. The results, displayed in Figure 14 using Receiver Operating Characteristic (ROC) curves, indicate an EER of 0.0% for left-eye-looking-left, 0.3247% for left-eye-looking-right, 0.5128% for right-eye-looking-left, and 0.9776% for right-eye-looking-right. Initial analysis indicates that the EER of 0.0% for left-eye-looking-left was obtained because of the small number of subjects and the constraints imposed on data acquisition. The results also suggest that performance improves considerably (see [2] for comparison) if (a) the variations in pose and the amount of specular reflection are reduced, and (b) the images are of higher resolution/quality.
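The paper does not give the exact correlation formula; a plausible reading, consistent with the match scores quoted in Figure 13, is normalized cross-correlation over the masked overlap:

import numpy as np

def match_score(img1, mask1, img2, mask2):
    # Normalized cross-correlation restricted to the overlap of the
    # two (registered) sclera masks.
    overlap = mask1 & mask2
    a = img1[overlap].astype(float)
    b = img2[overlap].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0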

To demonstrate the necessity of the proposed pre-processing techniques, the d-prime value⁸ was computed for 4 scenarios on a small subset of 15 subjects (images 1, 4, 8 from the sequence left-eye-looking-left): registration and correlation (d-prime = 2.8058); image enhancement, registration, and correlation (d-prime = 3.5664); line enhancement, registration, and correlation (d-prime = 8.2698); image enhancement, line enhancement, registration, and correlation (d-prime = 8.1965). The last two scenarios resulted in the best d-prime values, suggesting the importance of the pre-processing techniques.
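The footnote defines d-prime only qualitatively; the conventional formula, assumed here, is the mean separation normalized by the pooled standard deviation:

import numpy as np

def d_prime(genuine, impostor):
    # d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2)
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)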

9. Summary and Future work

In this paper we designed segmentation, enhancement and matching routines for processing the conjunctival vasculature of multispectral eye images pertaining to 49 subjects. The images used in this work were acquired under

⁸The d-prime value measures the separation between the genuine and impostor score distributions. A higher value typically suggests better performance


Figure 14. ROC curves (False Accept Rate (%) vs. False Reject Rate (%)) and normalized score distributions (% of scores vs. normalized score, genuine and impostor). a) and b) Left eye, looking left: EER = 0.0000%. c) and d) Left eye, looking right: EER = 0.3247%. e) and f) Right eye, looking left: EER = 0.5128%. g) and h) Right eye, looking right: EER = 0.9776%. The excellent performance can be attributed to the constraints imposed during data acquisition. In operational scenarios, conjunctival vasculature may be more suited as a soft biometric

several constraints to limit specular reflections and variations in pose as much as possible. The use of multispectral imagery will be beneficial when combining iris patterns with conjunctival vasculature. Since iris patterns are better resolved in the near-infrared spectrum and the conjunctival patterns are better resolved in the visible spectrum, we are looking at ways to combine the conjunctival vasculature with the iris for enhanced biometric recognition. Further, we are designing schemes to address the problem of pose variation, which can impact the performance of this soft biometric.

References

[1] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn. Image understanding for iris biometrics: A survey. Computer Vision and Image Understanding, 110(2):281–307, May 2008.

[2] S. Crihalmeanu, A. Ross, and R. Derakhshani. Enhancement and registration schemes for matching conjunctival vasculature. Proc. of International Conference on Biometrics, Alghero, Italy, pages 1240–1249, June 2–5, 2009.

[3] J. Daugman. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21–30, 2004.

[4] R. Derakhshani, A. Ross, and S. Crihalmeanu. A new biometric modality based on conjunctival vasculature. Proceedings of Artificial Neural Networks in Engineering, St. Louis, MO, November 2006.

[5] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Prentice-Hall Inc., 2nd edition, 2001.

[6] S. Periaswamy and H. Farid. Elastic registration in the presence of intensity variations. IEEE Transactions on Medical Imaging, 22(7):865–874, 2003.

[7] H. Proenca. Iris recognition: On the segmentation of degraded images acquired in the visible wavelength. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1502–1516, August 2010.

[8] Q. Li, S. Sone, and K. Doi. Selective enhancement filters for nodules, vessels, and airway walls in two- and three-dimensional CT scans. Medical Physics, 30(8), 2003.

[9] A. Ross. Iris recognition: The path forward. IEEE Computer, pages 30–35, February 2010.

[10] I. W. Selesnick. The double density DWT. In A. Petrosian and F. G. Meyer, editors, Wavelets in Signal and Image Analysis: From Theory to Practice, pages 39–69, 2001.

[11] I. W. Selesnick. A new complex-directional wavelet transform and its application to image denoising. IEEE International Conference on Image Processing, 3:573–576, 2002.

[12] I. W. Selesnick. The double-density dual-tree DWT. IEEE Transactions on Signal Processing, 52(5):1304–1314, 2004.

[13] C. J. Tucker. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sensing of Environment, 8:127–150, 1979.
