

INTERNATIONAL RESEARCH JOURNAL OF ENGINEERING AND TECHNOLOGY (IRJET) E-ISSN: 2395-0056 VOLUME: 02 ISSUE: 01 | APR-2015 WWW.IRJET.NET P-ISSN: 2395-0072

© 2015, IRJET.NET - All Rights Reserved Page 121

Application of Geodesic Active Contours in Iris Segmentation

    Kapil Rathor1

1 Assistant Professor, EXTC, St. John College of Engineering and Technology, Palghar, Maharashtra, India
----------------------------------------------------------------***--------------------------------------------------------------

Abstract: A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. Iris recognition is the most reliable and accurate biometric identification system. The richness and apparent stability of the iris texture make it a robust biometric trait for personal authentication. The performance of an automated iris recognition system is affected by the accuracy of the segmentation process used to localize the iris. Most iris recognition systems consist of an automatic segmentation system that is based on the Hough transform. These systems localize the circular iris and pupil region. However, it is difficult to segment iris images acquired under nonideal conditions using such conic models. In this paper, a novel iris segmentation scheme employing geodesic active contours (GACs) to extract the iris from the surrounding structures is described. The proposed scheme elicits the iris texture in an iterative fashion and is guided by both local and global properties of the image.

Index Terms: Geodesic active contours (GACs), iris codes, iris recognition, iris segmentation, level sets.

    1. INTRODUCTION

With the increase in terrorism and other illegal activity, there is a growing demand in our society for more secure and reliable identification that can replace the traditional means of identification. Biometric technologies, which recognize humans on the basis of behavioral or physiological characteristics, promise to be an effective solution. Biometric recognition can be described as the use of automated methods to accurately recognize individuals based on distinguishing physiological and/or behavioral traits. It is a subset of the broader field of human identification science. Biometrics offers the means to identify individuals without requiring that they carry ID cards and badges or memorize passwords. Examples of biometric technologies

include fingerprint recognition, face recognition, iris recognition and many others. Parts of the body and aspects of behavior have been used for years as a means of person recognition and authentication; the fingerprint, for example, has long been used in security and access applications. In comparison with other biometric features such as the face, fingerprint, retina, and hand geometry, the iris is seen as a highly reliable and accurate biometric technology because of its stability and high degree of variation between individuals: each human being is characterized by unique irises that remain relatively stable over a lifetime. The iris is present as a ring around the pupil of the eye in all human beings. Its complex pattern contains many distinctive features, such as arching ligaments, crypts, radial furrows, the pigment frill, the pupillary area, the ciliary area, rings, the corona, freckles and the zigzag collarette [1][2], which give a unique set of features for each human being; even the irises of identical twins are different. The surface of the iris is composed of two regions: the central pupillary zone and the outer ciliary zone. The collarette is the border between these two regions. The collarette region is less sensitive to pupil dilation and is usually unaffected by the eyelashes and eyelids. [3]

1.1 Features of the human iris

Some of the visible features of the human iris that are important for identifying a person will now be described, especially pigment-related features, features controlling the size of the pupil, visible rare anomalies, the pupil, the pigment frill and the collarette. The crypts, shown as number 5 in Fig. 1, are areas in which the iris is relatively thin. They have a very dark colour due to the dark colour of the posterior layer, and they appear near the collarette or on the periphery of the iris, looking like sharply demarcated excavations. The pigment spots, shown as number 6 in Fig. 1, are random concentrations of pigment cells in the visible surface of the iris and generally appear in the ciliary area. They are known as moles and freckles and are nearly black in colour.

  • INTERNATIONAL RESEARCH JOURNAL OF ENGINEERING AND TECHNOLOGY (IRJET) E-ISSN: 2395 -0056

    VOLUME: 02 ISSUE: 01 | JAN-2015 WWW.IRJET.NET P-ISSN: 2395-0072

    2015, IRJET.NET- All Rights Reserved Page 122

The features controlling the size of the pupil are the radial and concentric furrows, called contraction furrows. The radial furrows extend radially in relation to the center of the pupil; a typical radial furrow may begin near the pupil and extend through the collarette. The radial furrows are creases in the anterior layer of the iris from which loose tissue may bulge outward, and this is what permits the iris to change the size of the pupil. The concentric furrows are generally circular and concentric with the pupil. They typically appear in the ciliary area, near the periphery of the iris, and allow the loose tissue to bulge outward in a different direction than the radial furrows.

Fig-1: Features controlling the size of the pupil (1-pigment frill, 2-pupillary area, 3-collarette, 4-ciliary area, 5-crypts, 6-pigment spot) [4]

The collarette is the boundary between the ciliary area and the pupillary area. It is a sinuous line, shown as number 3 in Fig. 1, which forms an elevated ridge running parallel with the margin of the pupil; it is the thickest part of the human iris. The human iris may also show some rare anomalous visible features. Due to aging or trauma, atrophic areas may appear on the iris, resulting in a "moth-eaten" texture. Tumours may grow on the iris, or congenital filaments may occur connecting the iris to the lens of the eye. [4]

2. IRIS LOCALIZATION

To localize the iris, its boundaries must be found: first, the inner boundary between the pupil and the iris, and second, the outer boundary between the iris and the sclera. To find these boundaries, an edge detector and some image-processing methods are applied to the eye image. [5]

    2.1 Inner Boundary

Because the pupil area has a low gray level and appears dark in the eye image, it can be found by an edge detector. In addition, the characteristics of the pupil make it possible to remove some of the unnecessary areas, which helps in finding the inner boundary. To find the inner boundary, the Canny edge detector is applied to the eye image after the unnecessary areas have been excluded. [5]

2.2 Exclusion of Unnecessary Areas

In each image, the pupil belongs to the dark side of the gray-level histogram, while the noise caused by reflections off glasses belongs to the light side. It is therefore possible to suppress the light side, which has the higher gray levels. The mean value approximates the boundary between the light and dark sides, so the gray levels above the mean can be adjusted down toward it [6].
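As a sketch of this step (the paper gives no code, and NumPy is an assumption here), pixels brighter than the image mean can be clipped down to the mean, which suppresses specular highlights while leaving the dark pupil untouched:

```python
import numpy as np

def suppress_bright_regions(img):
    """Clip pixels brighter than the image mean down to the mean.

    Specular reflections (e.g., off glasses) lie on the bright side of
    the histogram; the dark pupil region is unaffected.
    """
    img = np.asarray(img, dtype=float)
    return np.minimum(img, img.mean())
```

This is one plausible reading of the adjustment described above; the exact gray-level mapping used in [6] may differ.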

2.3 Edge image of the eye by the Canny edge-detection algorithm

The purpose of edge detection in general is to significantly reduce the amount of data in an image while preserving the structural properties needed for further image processing. Before the edge-detection algorithm is applied, the image is filtered to remove noise; the Canny edge-detection algorithm is then applied to the filtered image.

2.3.1 Smoothing

A Gaussian filter can be used to filter out any noise in the original image before trying to locate and detect any edges. As the Gaussian filter can be computed using a simple mask, it is used exclusively in the Canny algorithm. Once a suitable mask has been calculated, Gaussian smoothing can be performed using standard convolution methods. A convolution mask is usually much smaller than the actual image, so the mask is slid over the image, manipulating a square of pixels at a time. The larger the width of the Gaussian mask, the lower the detector's sensitivity to noise; however, the localization error in the detected edges also increases slightly as the Gaussian width is increased. [7]
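The mask-and-convolve procedure can be sketched as follows (a minimal illustration, not the paper's code; SciPy's `convolve` stands in for the sliding-window convolution, and the `sigma`/`radius` defaults are arbitrary):

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_mask(sigma, radius):
    """Sampled 2-D Gaussian mask, normalised so its weights sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def smooth(img, sigma=1.4, radius=2):
    """Slide the (much smaller) mask over the image by convolution."""
    return convolve(np.asarray(img, dtype=float),
                    gaussian_mask(sigma, radius))
```

A wider mask (larger `radius` and `sigma`) lowers the noise sensitivity at the cost of slightly poorer edge localization, as noted above.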

2.3.2 Edge Detection

The Canny algorithm finds edges where the gray-scale intensity of the image changes the most. These areas are found by determining the gradients of the image: the gradient at each pixel of the smoothed image is determined by applying the Sobel operator. [8]
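For illustration (again an assumed SciPy-based sketch, not the paper's implementation), the per-pixel gradient magnitude used by this stage can be computed as:

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude(img):
    """Gradient magnitude of a (smoothed) image via the Sobel operator."""
    img = np.asarray(img, dtype=float)
    gx = sobel(img, axis=1)  # horizontal derivative
    gy = sobel(img, axis=0)  # vertical derivative
    return np.hypot(gx, gy)
```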

    2.3.3 Outer Boundary

It is very difficult to locate the boundary between the iris and the sclera when it is blurred. To find the outer boundary, the Canny edge detector is applied to the original eye image again. After that, the circular Hough transform is used to find the centre and radius of the iris. Once the Hough transform is complete, the Hough accumulator contains, for each point, the number of candidate circles passing through it. The point through which the maximum


number of circles pass is the centre of the iris, and the corresponding radius is the radius of the iris. [5]
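The accumulator logic can be sketched as follows for a single candidate radius (an illustrative NumPy sketch; real implementations iterate over a range of radii and often restrict each vote using the local gradient direction):

```python
import numpy as np

def hough_circle(edge_pts, radius, shape):
    """Vote for circle centres at a fixed radius.

    Each edge point votes for every centre lying `radius` away from it;
    the accumulator cell crossed by the most candidate circles gives
    the circle centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for (y, x) in edge_pts:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # handles repeated cells
    return acc
```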

    3. IRIS SEGMENTATION USING GACS (GEODESIC ACTIVE CONTOURS)

The iris localization procedure using GACs can be broadly divided into two stages: (A) pupil segmentation and (B) iris segmentation.

    3.1 Pupil Segmentation

To detect the pupillary boundary, the eye image is first smoothed using a 2-D median filter and the minimum pixel value is determined. The image is then binarized using a threshold value. Fig. 2(b) shows an iris image after binarization.

Fig-2: Pupil binarization. (a) Image of an eye with dark eyelashes. (b) Thresholded binary iris image. [9]

As expected, apart from the pupil, other dark regions of the eye (e.g., eyelashes) fall below this threshold value. A 2-D median filter is then applied to the binary image to discard the relatively smaller regions associated with the eyelashes. This reduces the number of candidate iris pixels detected as a consequence of thresholding, as seen in Fig. 3(a). Based on the median-filtered binary image, the exterior boundaries of all the remaining objects are traced, as shown in Fig. 3(b). Generally, the largest boundary among the remaining regions of the eye corresponds to the pupil.

    Fig-3: Pupil Segmentation. (a) 2-D Median filtered binary iris image. (b) Traced boundaries of all the remaining objects in the binary image (shown in gray color). (c) Fitting circle on all potential regions where the pupil might be present (shown in gray). [9]

    3.2 Iris Segmentation

To detect the limbic boundary of the iris, a novel scheme based on a level-set representation [10][11] of the GAC model is employed. This approach is based on the relation between active contours and the computation of geodesics (minimal-length curves) [12]. The technique is to evolve a contour from inside the iris under the influence of geometric measures of the iris image. GACs combine the energy-minimization approach of classical snakes with geometric active contours based on curve evolution.

3.2.1 GACs (Geodesic Active Contours)

Let γ(t) be the curve that has to gravitate toward the boundary of the object at a particular time t, as shown in Fig. 4; the time corresponds to the iteration number. Let φ be a function defined as the signed distance from the curve γ(t): φ(x, y) is the signed distance of the point (x, y) from the nearest point on the curve γ(t), so that

φ(x, y) = 0, if (x, y) is on the curve,
φ(x, y) < 0, if (x, y) is inside the curve,
φ(x, y) > 0, if (x, y) is outside the curve.   ..... (1)

φ has the same dimensions as the image to be segmented. The curve γ(t) is a level set of the function φ. Level sets are the sets of all points where φ = c for some constant c; thus, φ = 0 is the zeroth level set, φ = 1 is the first level set, and so on. φ is the implicit representation of the curve γ(t) and is called the embedding function, since it embeds the evolution of γ(t). The embedding function evolves under the influence of image gradients and region characteristics so that the curve approaches the boundary of the object. Thus, instead of evolving the parametric curve, the embedding function itself is evolved. In our algorithm, the initial curve is assumed to be a circle of radius just beyond the pupillary boundary.

    Fig-4: Curve evolving towards the boundary of the object.
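As an illustration of Eq. (1) (a NumPy sketch; the grid and circle parameters are arbitrary), the embedding function for a circular initial contour is simply the distance to the centre minus the radius:

```python
import numpy as np

def init_phi(shape, cx, cy, r):
    """Signed distance to a circle of radius r centred at (cx, cy):
    negative inside the curve, zero on it, positive outside (Eq. 1).
    A mesh plot of this function looks like a cone."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.hypot(xx - cx, yy - cy) - r
```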


Let the curve γ(t) be the zeroth level set of the embedding function φ. This implies that

dφ/dt = 0.

By the chain rule,

dφ/dt = (∂φ/∂x)(dx/dt) + (∂φ/∂y)(dy/dt) + ∂φ/∂t,

i.e.,

∂φ/∂t = −∇φ · γ′(t).

Splitting γ′(t) into its components along the normal (N(t)) and tangential (T(t)) directions,

∂φ/∂t = −∇φ · (v_N N(t) + v_T T(t)).

Now, since ∇φ is perpendicular to the tangent to γ(t),

∂φ/∂t = −∇φ · (v_N N(t)).   ..... (2)

The normal component is given by

N = ∇φ / |∇φ|.

Substituting this in (2),

∂φ/∂t = −v_N |∇φ|.

Let the speed v_N be a function of the curvature of the curve κ, a stopping function K (to stop the evolution of the curve) and an inflation force c (to evolve the curve in the outward direction), such that

∂φ/∂t = −K(c + εκ)|∇φ|.

Adding a geodesic (edge-attraction) term, the evolution equation for φ such that γ(t) remains the zeroth level set is

∂φ/∂t = −K(c + εκ)|∇φ| + ∇K · ∇φ,   ..... (3)

where, with the sign convention of Eq. (1) (φ negative inside the curve), the contour moves outward for c > 0.

Here K, the stopping term for the evolution, is an image-dependent force used to decelerate the evolution near the boundaries; c is the velocity of the evolution; ε indicates the degree of smoothness of the level sets; and κ is the curvature of the level sets, computed as

κ = (φ_xx φ_y² − 2 φ_x φ_y φ_xy + φ_yy φ_x²) / (φ_x² + φ_y²)^(3/2),

where φ_x is the gradient of φ in the x direction, φ_y is the gradient in the y direction, φ_xx is the second-order gradient in the x direction, φ_yy is the second-order gradient in the y direction, and φ_xy is the second-order gradient, first in the x direction and then in the y direction. Equation (3) is the level-set representation of the GAC model. It means that the level set γ(t) of φ is evolving according to

γ_t = K(c + εκ)N − (∇K · N)N,   ..... (4)

where N is the normal to the curve. The first term, Kεκ N, provides the smoothing constraint on the level sets by reducing their total curvature. The second term, Kc N, acts like a balloon force [14] and pushes the curve outward toward the object boundary. The goal of the stopping function is to slow down the evolution when it reaches the boundaries. However, the evolution of the curve would terminate only when K = 0, i.e., near an ideal edge. In most images, the gradient values differ along the edge, thus necessitating different K values. To circumvent this issue, the third, geodesic term, (∇K · N)N, is necessary so that the curve is attracted toward the boundaries (∇K points toward the middle of the boundary). This term makes it possible to terminate the evolution process even if (a) the stopping function has different values along the edges, and (b) gaps are present in the stopping function. The stopping term used for the evolution of the level sets is given by

K(x, y) = 1 / (1 + (‖∇(G(x, y) ∗ I(x, y))‖ / k)^α),

where I(x, y) is the image to be segmented, G(x, y) is the transfer function of a Gaussian filter, and k and α are constants. As can be seen, the term K(x, y) is not a function of φ.
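The curvature formula above can be evaluated on a discrete grid with central differences. The following NumPy sketch (an assumed implementation, not the paper's code) does exactly that:

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """Curvature of the level sets of phi via central differences:
    k = (pxx*py^2 - 2*px*py*pxy + pyy*px^2) / (px^2 + py^2)^(3/2)."""
    py, px = np.gradient(phi)        # np.gradient returns d/dy, d/dx
    pyy, _ = np.gradient(py)
    pxy, pxx = np.gradient(px)
    num = pxx * py ** 2 - 2.0 * px * py * pxy + pyy * px ** 2
    den = (px ** 2 + py ** 2) ** 1.5 + eps   # eps guards against /0
    return num / den
```

As a sanity check, for a signed-distance function of a circle, the level set through a point at distance ρ from the centre is a circle of radius ρ, so the computed curvature there should be close to 1/ρ.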

4. PROPOSED METHOD

Consider an iris image to be segmented, as shown in Fig. 5(a). The stopping function K obtained from this image is shown in Fig. 5(b) (in our implementation, k = 2.8 and α = 8). As the pupil is segmented prior to segmenting the iris, the stopping function K is modified by deleting the circular edges corresponding to the pupillary boundary, resulting in a new stopping function K′. This ensures that the evolving level set is not terminated by the edges of the pupillary boundary [Fig. 5(c)].
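A sketch of the stopping function (SciPy-based and assumed, not the paper's code; the Sobel magnitude stands in for ‖∇(G ∗ I)‖, and the defaults follow the constants quoted above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def stopping_function(img, k=2.8, alpha=8.0, sigma=1.0):
    """K = 1 / (1 + (|grad(G * I)| / k)^alpha): close to 1 in flat
    regions and close to 0 on strong edges, where evolution must slow."""
    smoothed = gaussian_filter(np.asarray(img, dtype=float), sigma)
    grad = np.hypot(sobel(smoothed, axis=1), sobel(smoothed, axis=0))
    return 1.0 / (1.0 + (grad / k) ** alpha)
```

Deleting the pupillary edges from K to obtain K′ then amounts to resetting K to 1 in a thin annulus around the detected pupil circle.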


Fig-5: Stopping function for the GACs. (a) Original iris image. (b) Stopping function K. (c) Modified stopping function K′. [9]

The evolution equation is

(φ_{i,j}^{t+1} − φ_{i,j}^{t}) / Δt = −c K′_{i,j} |∇φ_{i,j}^{t}| − ε K′_{i,j} κ_{i,j}^{t} |∇φ_{i,j}^{t}| + ∇φ_{i,j}^{t} · ∇K′_{i,j},

where Δt is the time step (set to 0.05 in our implementation). The first term on the right-hand side is the velocity (advection) term, which in the case of iris segmentation acts as an inflation force. This term can lead to singularities and is, hence, discretized using upwind finite differences [13]. The upwind scheme approximates |∇φ_{i,j}^{t}| in the advection term by A, where

A = [ max(D_{i,j}^{−x}, 0)² + min(D_{i,j}^{+x}, 0)² + max(D_{i,j}^{−y}, 0)² + min(D_{i,j}^{+y}, 0)² ]^(1/2),

D^{−x} is the first-order backward difference of φ in the x-direction, D^{+x} is the first-order forward difference of φ in the x-direction, D^{−y} is the first-order backward difference of φ in the y-direction, and D^{+y} is the first-order forward difference of φ in the y-direction (the max/min pairing corresponds to an outward-moving front under the sign convention of Eq. (1)). The second term, ε K′_{i,j} κ_{i,j}^{t} |∇φ_{i,j}^{t}|, is a curvature-based smoothing term and can be discretized using central differences; in our implementation, c = 0.65 and ε = 1 for all iris images. The third, geodesic term is also discretized using central differences. As the embedding function φ evolves, the curve grows until it satisfies the stopping criterion defined by the stopping function K′. At times, however, the contour continues to evolve in a local region of the image where the stopping criterion is not strong. This leads to over-evolution of the contour, which can be avoided by minimizing the thin-plate spline energy of the contour [15]. By computing the difference in energy between two successive contours, the evolution scheme can be regulated: if the difference between the contours is less than a threshold (indicating that the contour evolution has stopped at most places), the contour evolution process is terminated.
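A single update step can be sketched as follows (an assumed NumPy illustration: signs follow the convention of Eq. (1) with φ negative inside the curve, K is the stopping function sampled on the grid, and the np.roll boundary handling is simplistic):

```python
import numpy as np

def evolve_step(phi, K, c=0.65, eps=1.0, dt=0.05):
    """One GAC level-set update: upwind advection, central-difference
    curvature smoothing, and the geodesic edge-attraction term."""
    # upwind approximation of |grad phi| for the outward advection term
    dxm = phi - np.roll(phi, 1, axis=1)    # backward difference in x
    dxp = np.roll(phi, -1, axis=1) - phi   # forward difference in x
    dym = phi - np.roll(phi, 1, axis=0)    # backward difference in y
    dyp = np.roll(phi, -1, axis=0) - phi   # forward difference in y
    grad_up = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                      np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)

    # central differences for the curvature and geodesic terms
    py, px = np.gradient(phi)
    Ky, Kx = np.gradient(K)
    grad_c = np.hypot(px, py)
    pyy, _ = np.gradient(py)
    pxy, pxx = np.gradient(px)
    kappa = (pxx * py ** 2 - 2.0 * px * py * pxy + pyy * px ** 2) \
            / (grad_c ** 3 + 1e-8)

    return phi + dt * (-c * K * grad_up
                       - eps * K * kappa * grad_c
                       + px * Kx + py * Ky)
```

With K equal to 1 everywhere (no edges), the front simply inflates outward, i.e., φ decreases on the zero level set at roughly c·Δt per step.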

    5. IMPLEMENTATION OF IRIS SEGMENTATION USING GACS

There are two steps in the implementation:

    5.1 Pupil segmentation

a. To detect the pupillary boundary, the eye image is first smoothed by a 2-D Gaussian filter.

b. The threshold is then set to M + 13, where M is the minimum gray level in the filtered image. This threshold is applied to the eye image to obtain a binary image.

c. A 2-D median filter is then applied to the binary image so that the residual regions left by the eyelashes can be removed.

d. After median filtering, part of the pupillary area may remain excluded from the binary image because of specular reflection during eye-image acquisition. To overcome this, a morphological closing operation is applied, using the MATLAB function imclose with a disk structuring element of size 20.

e. Finally, the pupillary boundary is located on the eye image.

After all these steps, the pupillary boundary can be located as shown in Fig. 6.

    (a) (b)

    Fig - 6: (a) Eye Image (b) Eye image with Pupillary boundary localization
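Steps (a)-(e) can be sketched in Python (an assumed SciPy translation: `binary_closing` with a disk stands in for MATLAB's imclose, the smoothing and filter sizes here are illustrative, and only the M + 13 threshold is taken from the text):

```python
import numpy as np
from scipy import ndimage

def segment_pupil(eye, closing_radius=20):
    """Pupil mask from a grayscale eye image, following steps (a)-(e)."""
    eye = np.asarray(eye, dtype=float)
    smoothed = ndimage.gaussian_filter(eye, sigma=2)                 # (a)
    binary = smoothed < (smoothed.min() + 13)                        # (b) M + 13
    binary = ndimage.median_filter(binary.astype(np.uint8), 5) > 0   # (c)
    r = closing_radius                                               # (d) "imclose"
    disk = np.hypot(*np.mgrid[-r:r + 1, -r:r + 1]) <= r
    binary = ndimage.binary_closing(binary, structure=disk)
    labels, n = ndimage.label(binary)                                # (e) keep the
    if n == 0:                                                       # largest blob
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```

Tracing the boundary of the returned mask then gives the pupillary boundary of step (e).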

    5.2 Iris Segmentation

A contour is first initialized near the pupil. The embedding function φ is initialized as a signed distance function to the contour γ(t = 0); its mesh plot looks like a cone. The evolution equation is

(φ_{i,j}^{t+1} − φ_{i,j}^{t}) / Δt = −c K′_{i,j} |∇φ_{i,j}^{t}| − ε K′_{i,j} κ_{i,j}^{t} |∇φ_{i,j}^{t}| + ∇φ_{i,j}^{t} · ∇K′_{i,j}.


After the embedding function has evolved according to the above equation, the curve grows until it satisfies the stopping criterion defined by the stopping function K′.

5.2.1 Stopping function of the eye image

The stopping function

K(x, y) = 1 / (1 + (‖∇(G(x, y) ∗ I(x, y))‖ / k)^α)

is found by applying the Gaussian filter G(x, y) to the image, with the constants taken as α = 10 and k = 1.6. The modified stopping function K′ is then determined by deleting the pupillary boundary from the stopping function K. The evolution of the curve, and the mesh plot of the corresponding embedding function at the final iteration, are shown in Fig. 7.

Fig-7: Evolution of the GAC during iris segmentation at the final iteration. (a) Segmented iris image. (b) Mesh plot of the corresponding embedding function.

    6. RESULTS

    6.1 Application on Image of CASIA Data base

    6.1.1 Pupil Segmentation

    (a) (b) (c)

(d) (e)

Fig-8: (a) Eye image from the CASIA database, (b) eye image after thresholding, (c) iris-pupil boundary of the iris, (d) stopping function K of the eye image, (e) modified stopping function K′ of the eye image

6.1.2 Segmentation of the iris using GACs

The iris segmentation is shown in the table below.

Table-1: Evolution of the GAC during iris segmentation at different iterations (for each row, the contour and a mesh plot of the corresponding embedding function are shown for the initial, 200th, 1400th and 3800th iterations).

    6.2 Application on Proprietary Database

    6.2.1 Pupil Segmentation

    (a) (b) (c)

    (d) (e) (f)

Fig-9: (a) Eye image from the proprietary database, (b) eye image after thresholding, affected by specular reflection, (c) thresholded image after applying the morphological closing operation, (d) iris-pupil boundary of the iris, (e) stopping function K of the eye image, (f) modified stopping function K′ of the eye image

  • INTERNATIONAL RESEARCH JOURNAL OF ENGINEERING AND TECHNOLOGY (IRJET) E-ISSN: 2395 -0056 VOLUME: 02 ISSUE: 01 | APR-2015 WWW.IRJET.NET P-ISSN: 2395-0072

    2015, IRJET.NET- All Rights Reserved Page 127

    6.2.2 Segmentation of iris using GACs

The iris segmentation is shown in the table below.

Table-2: Evolution of the GAC during iris segmentation at different iterations (for each row, the contour and a mesh plot of the corresponding embedding function are shown for the initial contour and the 200th, 1000th and 3400th iterations).

    7. CONCLUSION

The process of segmenting the iris plays a crucial role in iris recognition systems. Traditionally, iris systems have employed the integro-differential operator or its variants to localize the spatial extent of the iris. In this paper, a novel scheme using GACs for iris segmentation has been discussed. The GAC scheme is an evolution procedure that attempts to elicit the limbic boundary of the iris as well as the contour of the eyelid in order to isolate the iris texture from its surroundings. Experimental results on the proprietary and the CASIA-Interval datasets indicate the benefits of the proposed algorithm. The stopping criterion for the evolution of the GACs is image independent and does not take into account the amount of edge detail present in an image. Thus, if the iris edge details are weak, the contour evolution may not stop at the desired iris boundary, leading to an over-segmentation of the iris. Over-segmentation can be avoided by developing an adaptive stopping criterion for the evolution of the GACs.

ACKNOWLEDGEMENT

Nothing in this world can be accomplished without the blessing of God, the Almighty. Therefore, at the outset, I would like to thank Him, with whose blessing this arduous work could take shape. I wish to thank Prof. Rekha Vig (EXTC Department, MPSTME, Mumbai) and Mr. Santosh Kumar Soni for their valuable advice and support.

REFERENCES

    [1] J. G. Daugman, "How iris recognition works",

IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 1, Jan. 2004, pp. 21-30.

[2] J. G. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recognition, vol. 36, no. 2, 2003, pp. 279-291.

[3] K. Roy and P. Bhattacharya, "An iris recognition method based on zigzag collarette area and asymmetrical support vector machines," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics (SMC 2006), Taiwan, 8-11 October 2006, pp. 861-865.

[4] A. Muroň and J. Pospíšil, "The human iris structure and its usages," Acta Univ. Palacki. Olomuc., Fac. Rer. Nat., Physica, vol. 39, 2000, pp. 87-95.

[5] H. Sung, J. Lim, J. Park and Y. Lee (Division of Computer and Information Engineering, Yonsei University, Seoul, Korea), "Iris recognition using collarette boundary localization," in Proc. 17th International Conference on Pattern Recognition (ICPR'04), 2004.

[6] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2001.

    [7] Srikanth Rangarajan, Algorithms for edge detection.

[8] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.

[9] S. Shah and A. Ross, "Iris segmentation using geodesic active contours,"


IEEE Trans. Information Forensics and Security, vol. 4, no. 4, Dec. 2009.

[10] J. A. Sethian, "A review of recent numerical algorithms for hypersurfaces moving with curvature dependent speed," J. Differential Geometry, vol. 31, pp. 131-161, 1989.

[11] R. Malladi, J. A. Sethian and B. C. Vemuri, "Shape modeling with front propagation: A level set approach," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, no. 2, pp. 125-133, Feb. 1995.

[12] V. Caselles, R. Kimmel and G. Sapiro, "Geodesic active contours," Int. J. Computer Vision, vol. 22, no. 1, pp. 61-79, Feb./Mar. 1997.

[13] J. Sethian and J. Strain, "Crystal growth and dendritic solidification," J. Computational Physics, vol. 98, pp. 231-253, 1992.

[14] L. D. Cohen, "On active contour models and balloons," Computer Vision, Graphics, and Image Processing: Image Understanding, vol. 53, no. 2, pp. 211-218, 1991.

[15] F. L. Bookstein, "Principal warps: Thin-plate splines and the decomposition of deformations," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 6, pp. 567-585, Jun. 1989.

    BIOGRAPHIES

Kapil Rathor obtained a Master of Technology degree (Electronics and Communication) from NMIMS, Mumbai, in June 2013. His M.Tech. research is related to Biometrics and Image Processing, and his research work was carried out at C-DAC, Mumbai. He is currently working as an Assistant Professor at St. John College of Engineering and Technology, Palghar, India (Electronics and Telecommunication). His research paper "Application of Image Processing in Iris Segmentation for a Biometric System Based on Iris" has been published in the International Journal of Digital Signal and Image Processing (IJDSIP). Another research paper, "Iris collarette boundary localization using 2-D DFT for an iris based biometric system," has been published in the International Journal of Advanced Research in Computer Engineering and Technology (IJARCET).