
    Biometric Identification using Hand Vein Patterns

    Electronics & IT

    P6 Student Project

    Spring Semester 2011

    Group 620

    Department of Electronic Systems

    Aalborg University


    Department of Electronic Systems

    Electronics & IT

    Fredrik Bajers Vej 7 B

    9220 Aalborg

    Phone 9940 8600

    http://es.aau.dk

    Title: Biometric Identification using Hand Vein Patterns

    Subject: Information Processing Systems

    Project period: P6, spring semester 2011

    Project group: 620

    Participants: Claire Petitimbert, Marion Distler, Niels G. Myrtue, Sebastian N. Jensen

    Supervisors: Thomas B. Moeslund, Kamal Nasrollahi

    Number of copies: 7

    Number of pages: 93 (last page is p. 85)

    Attachments: 1 CD

    Appendices: 3

    Completed: 30-05-2011

    Abstract:

    This report describes the design and implementation of a system for identifying individuals based on their hand vein pattern. The main goal has been to produce a cheap, functional system with a low False Acceptance Rate. Near Infrared imaging has been used since it produces a clearer vein pattern than visible light. For this purpose, a consumer webcam, a Logitech Webcam Pro 9000, was modified to capture images in the NIR spectrum. A setup has also been built to constrain the hand and allow for a fixed region of interest. The system first preprocesses the images using Gaussian and median filters for smoothing and noise removal and histogram stretching for contrast enhancement. Next, image segmentation is performed using a filter that emphasizes the directional nature of veins along with local thresholding. The image is then postprocessed to remove noise, using morphological operations and blob removal. Afterwards the entire pattern is thinned to a 1-pixel-thick skeleton, which is used as a feature for matching and recognition. Images are matched using a Modified Hausdorff Distance, which produces an average distance between two thinned vein patterns. A threshold is used to decide whether two vein patterns are similar or not.

    The system was trained using a training data set consisting of 20 persons and validated using another data set consisting of 7 persons. Test subjects were mainly Caucasian males in the age group 19-25. Testing resulted in a total True Acceptance Rate of 65% and a False Acceptance Rate of 0%, as well as a Failure to Enroll Rate of 16.25%.

    The contents of this report are freely available, but publication (with source reference) is only permitted as agreed with the authors.


    Preface

    This report has been produced by student group 620 at the School of Information and Communication Technology (SICT), Aalborg University, in the period from 01-02-2011 to 30-05-2011. It is a 6th semester bachelor project at the Department of Electronic Systems - Electronics and IT and has been created in cooperation with supervisor Thomas Baltzer Moeslund and co-supervisor Kamal Nasrollahi. The general theme for the project is Information Processing Systems and the title is Biometric Identification using Hand Vein Patterns. The purpose of the project has been to design and implement an identification system that uses the hand vein pattern of a person.

    Because the project is made mainly for educational purposes, the final product is not considered as having commercial value.

    The report follows the general structure of a pattern recognition system: Image Acquisition, Preprocessing, Segmentation, Postprocessing, Feature Extraction and Recognition. Appendices are placed at the end of the report and contain information about parameter determination and some implementation details. The appendices are denoted A and B. Along with the report a CD is enclosed, containing developed software, raw test results and a PDF version of this report. Throughout the report, material on the CD is referenced in this way 1, with the full path given as a footnote. All external literature used in the project is referenced as [number, pages] and refers to the list of references at the end of the report. An example of a reference could be [1, p. 1]. A list of abbreviations used in the report can be found on page 78.

    Claire Petitimbert Marion Distler

    Niels G. Myrtue Sebastian N. Jensen

    1path/to/item


    Contents

    1 Introduction 1
      1.1 Scope of the Project 1

    2 An Identification System Based On The Vein Pattern 3
      2.1 Biometrics 3
        2.1.1 Background & Purpose of Biometrics 4
        2.1.2 Limitations of Biometrics 4
      2.2 Hand Vein Patterns as a Biometric 4
        2.2.1 Uniqueness of Vein Patterns 4
        2.2.2 Vein Pattern and Near Infra-Red (NIR) Imaging 6
        2.2.3 Blood Vein Pattern: a Good Biometric Feature? 7
      2.3 Problem Statement 8
        2.3.1 System Limitations 8
        2.3.2 Performance Criteria 8
      2.4 Related Works 8
        2.4.1 ROI extraction 9
        2.4.2 Image Processing 9
        2.4.3 Feature Extraction & Matching 9
        2.4.4 Performance 10

    3 System Overview 13

    4 Image Acquisition & Preprocessing 15
      4.1 Capture Setup 15
        4.1.1 Modification of the camera 17
        4.1.2 Illumination 17
        4.1.3 Hand Constraints & Region Of Interest 18
      4.2 Preprocessing Methods 20
        4.2.1 Smoothing & Noise Removal 20
        4.2.2 Contrast Enhancement with Histogram Stretching 21
      4.3 Summary & Results 23

    5 Image Segmentation & Postprocessing 25
      5.1 Repeated Line Tracking 25
        5.1.1 Description 25
      5.2 Edge Detection Using Laplacian of Gaussian 27
        5.2.1 Concept 28
        5.2.2 Description 28
      5.3 Adaptive Local Threshold 29
        5.3.1 Concept 30
      5.4 Direction Based Vascular Pattern Extraction 30
      5.5 Performance & Choice of Method 32
      5.6 Postprocessing 35
        5.6.1 Morphological Operators 35
        5.6.2 Blob Removal 38
      5.7 Summary and Results 39

    6 Feature Extraction & Recognition 45
      6.1 Feature Extraction Methods 45
        6.1.1 Thinning 45
        6.1.2 End and Crossing Points 48
      6.2 Recognition methods 49
        6.2.1 Principal Component Analysis 49
        6.2.2 Modified Hausdorff Distance 52
        6.2.3 Delaunay Triangulation 53
      6.3 Performance 54
        6.3.1 Principal Component Analysis (PCA) 54
        6.3.2 Delaunay Triangulation 54
        6.3.3 Modified Hausdorff Distance (MHD) 55
      6.4 Choice of Method & Matching Procedure 56
      6.5 Summary 57

    7 System Training & Validation 59
      7.1 Enrollment Procedure 59
      7.2 Determining the Necessary Amount of Matching Samples 60
        7.2.1 Method 60
        7.2.2 Results & Analysis 60
      7.3 Determining Threshold Value 60
        7.3.1 Method 61
        7.3.2 Results & Analysis 62
      7.4 Performance Test of the Recognition System 67
        7.4.1 Method 67
        7.4.2 Results & Analysis 68
      7.5 Conclusion on Testing 69

    8 Conclusion & Perspectives 71
      8.1 Conclusion 71
      8.2 Perspectives & Improvements 72

    References 73

    Appendices 76
      A Determination of image processing parameters 79
        A.1 Local Threshold: Window size 79
        A.2 Opening and Closing: Kernel size 80
      B Implementation details for image processing 83
        B.1 Linear Filter with an Extended Image 83
        B.2 Morphological operators on images with objects on the edges 85


    CHAPTER 1
    Introduction

    Nowadays a very large number of identification systems exist that are based on different types of biometrics. These include iris, fingerprint, retina, voice, face, palm and vascular pattern recognition. The last one is a relatively new technology in the biometric field, and has gradually been adopted around the world. The idea of using the hand vascular pattern as a biometric was first considered in the early 1990s, but it wasn't until 1997 that a commercial product was developed. In 2000 it finally became popular when an application was created for personal identification based on the vein pattern on the back of the hand [1].

    Since its introduction, hand vein pattern technology has expanded to finger- and palm-based systems and was adopted in 2007 by the International Organization for Standardization (ISO), where the storage and transmission of vascular biometric images was standardized.

    The demand for secure identification systems has increased exponentially over the last ten years. These systems are required to be very reliable but also easy to use since their application is no longer restricted to high-security facilities. The advantages of hand vein pattern recognition are due to the fact that veins lie underneath the skin, which makes them easily accessible for the system but also hard to alter. In this perspective, the accessibility of the vein pattern compared with other biometrics and its ease of use have made it a very interesting alternative for applications where a high level of security is required. It is also a good alternative to biometric systems that require physical contact in order to identify the individual, especially in environments such as hospitals where hygiene has high priority.

    An identification system should be fast, simple and secure, and due to these desirable advantages, vein pattern technology is being considered for various authentication solutions for use in public places (access control, time and attendance, security, hospitals). The market for hand vascular pattern technology is growing rapidly and today it is an area of ongoing research that draws a lot of attention.

    1.1. Scope of the Project

    As the ability to verify the identity of individuals has become increasingly important in many areas of modern life, the need for cheap biometric recognition systems becomes greater. Knowing this, it would be appropriate to build an application, based on a cheap setup, which identifies an individual by extracting his hand vascular pattern using a Near Infra-Red (NIR) image.


    Such a system could be used in different areas, mainly for physical access control and time and attendance, for example in schools, libraries, airports, hospitals and banks.

    First the feasibility of identifying an individual based on his hand vein pattern will be explored.


    CHAPTER 2
    An Identification System Based On The Vein Pattern

    This chapter explores the feasibility of using hand vein patterns as a biometric for identification of individuals. First a brief introduction to biometrics is given, defining terms and listing the requirements that a good biometric should satisfy. Next the details of the hand vein pattern are analysed with respect to these requirements. This analysis leads to a problem statement that defines the goals and delimitations of the project. The chapter ends with a review of some of the related work that has been done in the field. This part also identifies the core elements that make up a hand vein pattern recognition system.

    2.1. Biometrics

    Oxford dictionaries define biometry as "the application of statistical analysis to biological data" [2]. In other words, biometry is simply the analysis of a given biological test subject in search of patterns. In this sense, any property or feature of a biological subject that can somehow be measured can be considered a biometric. Despite this, the term biometrics usually refers specifically to the measuring of human features. A. K. Jain et al. define it as follows: "Biometrics is the science of establishing the identity of an individual based on the physical, chemical or behavioral attributes of the person" [1, p. 1]. These can be physical features such as fingerprints or behavioral ones such as walking pattern or handwriting. A. K. Jain et al. [3] also define the following requirements that a given measure must satisfy to be a biometric:

    Universality: each person should have the characteristic measured.

    Distinctiveness: any two persons should be sufficiently different in terms of this characteristic.

    Permanence: this characteristic should be sufficiently invariant (with respect to the matching criterion) over a period of time.

    Collectability: this characteristic should be measurable quantitatively.


    2.1.1 Background & Purpose of Biometrics

    Biometrics are used to identify or distinguish individuals based on their unique features. Historically, the main use of biometrics has been in criminal investigations, and initially the methods were quite simple. Measurements were based on body features that were easily visible such as scars or birthmarks, or distances between individual body parts. While simple (albeit time consuming) to perform, these methods suffered from imprecise measurements and indistinct features, which caused a higher risk of failed identification or false positives. In the late 19th century, the potential of fingerprints as a biometric was acknowledged. This was a much more subtle feature, but with far more distinctiveness than its predecessors. The fingerprint recognition system was soon the most commonly used in English-speaking countries [4].

    Today, fingerprints are just one of many biometrics used for identification purposes. While still serving as important tools in criminal investigation, biometrics are now also used in commercial products that require user authentication, such as access control. Another possible application is in surveillance, where facial recognition is employed. It is a field of biometrics that has been researched extensively, especially in the wake of the 9/11 attacks [5] [6].

    As the price of electronic equipment has decreased over the years, affordable consumer products have emerged that implement biometrics in one form or another. An example is digital cameras that can detect and focus on faces automatically; some laptop computers also employ fingerprint or facial recognition for access control.

    2.1.2 Limitations of Biometrics

    With the popular and widespread use of biometrics, it is important to acknowledge the limitations that are inherent in biometric recognition systems. The core limitation is the fact that a system can never measure a given feature with absolute accuracy: good measurements require precise equipment and usually the subject must be positioned in a certain way to achieve usable results. If for any reason the measurements are inaccurate, the risk of a system error increases greatly. The consequence of a system failure will depend on the application. If a user is falsely rejected by a security system, the damage is usually limited as the user can just try again. If on the other hand an impostor is falsely accepted by the system, the potential damage could be great. For this reason, a high False Acceptance Rate (FAR) is usually more critical for a security system than a low True Acceptance Rate (TAR). The choice of biometric can improve the TAR if it is persistent and easy to measure accurately, and the FAR can be reduced if it is different for all subjects.

    2.2. Hand Vein Patterns as a Biometric

    The previous section defined biometrics and listed a number of requirements that a given measure must satisfy to be considered a good biometric. These are universality, distinctiveness, collectability and time-invariance. In this section, the human vein pattern is explored in detail with respect to these requirements. The purpose is to determine if the vein pattern qualifies as a good biometric.

    2.2.1 Uniqueness of Vein Patterns

    To the knowledge of the project group, at the time of writing no research has been done with the intent to prove the uniqueness of the human vein pattern. However, a lot of research does exist that indirectly relates to this uniqueness. This is mainly research on the genesis and development of the blood vessels, and it provides a compelling


    argument for uniqueness. Another argument is based on the successful application of vein patterns as a biometric in commercial products. The following subsections explore these arguments.

    2.2.1.1 Biological Arguments

    The claim of uniqueness of the vein pattern can be supported by biological arguments resulting from the development and spatial arrangement of the vascular pattern.

    Vascular Development

    Eichmann et al. [7] have studied the development of the human vascular system and their research forms a basis for an argument for uniqueness. This subsection summarizes important points from their research.

    Histologically speaking, the composition of blood vessels is quite uncomplicated. Indeed, the capillaries, the smallest branches of the vascular system, are made of a basement membrane that surrounds endothelial cells. These endothelial cells represent the major cellular compartment of the vascular system.

    The cardiovascular system is one of the more important organs in the human body since it carries nutrients and oxygen to all tissue in the body. Therefore it seems logical that it is one of the first organs to be formed during embryonic development. Its formation starts with the creation of the mesoderm layer through differentiation of the endothelial cells. It is the mesoderm that will later divide into structures in order to form the major embryonic vessels. Once the embryonic vessels are formed, they differentiate into arteries or veins in order to form a real primary circulation that works well with the heart.

    Recently, some molecules have been discovered to be present in the endothelial cells at an early stage of development; they label the cells as arterial or venous. In the nervous system, the same molecules are implicated in the establishment of cell boundaries and in the guidance of developing axons. The discovery of these molecules has opened speculation regarding the similarities in the development of the nervous and the vascular system, and regarding the role they may play in vessel guidance during embryonic development.

    These observations indicate that the development of the vascular system is complex and quite random. Such complexity and randomness make a case for unique vein patterns in every individual.

    Spatial arrangement

    Nadort [8] makes an argument for uniqueness based on the vascular spatial arrangement. Her arguments are summarised in this subsection.

    The coronary patterns are believed to be quasi-fractal since the branching parameters, even though not identical, always fall within the same range of values [9]. Branching parameters refer to the diameter relationship of the vessels involved in a branching point and the angles taken by the new branches with respect to the parent vessel's direction.

    Several biological models, such as those described in [9] [10] [11], have been made to predict the essential features of the vein pattern, and they show that the sources of variability in branching parameters in a real physiological system are yet to be determined. This conclusion arose when the results of these models were shown not to be in accordance with a real vascular system. Therefore this variability is not just random, and branching values are likely to be influenced by the surrounding conditions such as local flow requirements or local anatomy. Even though branching parameters are not totally random, there is still tremendous variability, which provides a good argument for a unique vascular pattern in every individual.

    Both biological arguments (vascular development and spatial arrangement), although they are based on research that is not focused on demonstrating the claim of uniqueness


    of vein patterns, lead to the same conclusion: that the vascular pattern is most likely unique for every individual.

    2.2.1.2 Argument from Application

    Blood vessel patterns have already been used in a great number of commercial identification and authentication systems. These systems are mainly based on either retina or hand vein pattern recognition. The former is used in a large number of high security authentication systems because it is very hard to forge. But this kind of scanner is not very practical for mass commercial use, because the subject has to move close to the device and look in a particular direction in order for it to collect the data needed. However, other commercial devices exist that use blood vessels from more accessible areas of the body such as fingers, the palm and the back of the hand. Their use is also expanding. A commercial palm vein authentication system is for example used in France at the Graduate Management Admission Test to enroll students at the beginning of each exam [12].

    So as shown previously, biometric technologies based on vein patterns are quite widely used, and considerable databases, built up by all these retina and hand biometric systems, have already been collected. Identical patterns on different persons have yet to be reported by any study [8].

    2.2.1.3 Conclusion on the Uniqueness of Vein Pattern

    As in the early days of using fingerprints as a biometric feature, even though biological arguments tend to demonstrate the uniqueness of such a feature, it can never be proved theoretically with 100% certainty. But the larger the database and the use of fingerprints became, the more certain we were about their uniqueness, since no two persons have yet been found with the same fingerprints. Fingerprints have been used for identification for over 100 years; blood vessel patterns haven't been used for that long.

    To conclude, studies show that even though the development of the vein pattern is not entirely random, the difference between individuals' vein patterns is still great, and while the collective database expands rapidly, no two identical patterns have yet been observed. Therefore even though vein patterns have not been universally declared as unique for every individual, it is a valid assumption to make.

    2.2.2 Vein Pattern and NIR Imaging

    In order to collect the vein pattern, an image of it is needed. The structure of the vein pattern can be obtained either through thermal imaging, also called Far Infra-Red, or through NIR imaging. Since a thermal camera costs several thousand US dollars [13], NIR is a more viable option for building a cheap capture setup. That is why this part is going to focus on NIR imaging of the hand.

    The human body radiates infrared light, but only in the range of 3000 - 14000 nm with an intensity high enough to be picked up (10 mW/cm²). Natural radiation in the NIR, on the other hand, is not strong enough to be detected by devices [8]. Therefore if NIR is used, the hand needs to be irradiated, and it is because of the absorption and reflection properties of the body that a useful image can be captured.

    The optical properties of the tissues, such as absorption and reflection, are determined by their chemical composition. So in order to understand the output of infra-red imaging of the hand it is necessary to take a closer look at the optical properties of its main chemical components. The dominant component of the skin is water: it absorbs light with wavelengths below 300 nm and above 1000 nm. Two other components, hemoglobin and melanin, are the main absorbers of light in the visible spectrum (400 - 650 nm).


    Therefore a tissue optical window remains, from 650 to 1100 nm, where light is able to penetrate deeper into the skin thanks to a weak overall absorption from the outer components of the skin. From 650 to 1100 nm, absorption from the main components of the blood, deoxy- and oxyhemoglobin, dominates [8].

    Therefore, when NIR is used, the veins absorb more light than the surrounding tissues, as seen in Figure 2.1, so this kind of imaging is suited to reveal the underlying pattern of the blood vessels. This makes the blood vein pattern collectible.

    Figure 2.1: Picture of the back of a hand in NIR

    2.2.3 Blood Vein Pattern: a Good Biometric Feature?

    As said previously in section 2.1 on page 3, a good biometric feature needs four qualities: universality, distinctiveness, collectability and permanence. With respect to these qualities, the following can be concluded about biometric recognition using hand vein patterns:

    Universality: It is safe to say that, except in rare cases, everyone has hands and blood vessels in them. Vein patterns can thus be considered universal.

    Distinctiveness: Based on the discussion above, it can be concluded that the distinctiveness of the vein pattern is a valid assumption.

    Collectability: Thanks to NIR imaging techniques, the pattern can be extracted and the features measured.

    Permanence: The development of the hand vein pattern stops around the age of 20. Thereafter, only minor changes occur due to aging of the bones. Considering a healthy person with no vascular diseases who has not had his vascular pattern


    surgically altered, it is safe to assume that the only changes occurring in the vessels are vasoconstriction and vasodilation (shrinking and expansion of the vessel diameter) due to temperature, activity etc. Therefore it can be concluded that the general pattern of the hand's blood vessels is permanent [8].

    Based on this discussion, the hand blood vessel pattern can be considered a good biometric feature.

    2.3. Problem Statement

    Based on the previous analysis the following problem statement can be made.

    How is it possible to identify an individual based on the structure of his or her hand vein pattern?

    With this as a starting point, the following questions can be asked:

    What should a hand vein pattern recognition system consist of?

    What methods are available for vein pattern recognition, and how can they be combined?

    How is it possible to make a cheap vein pattern capture setup that still gives good and stable images?

    How can the performance of such a system be tested?

    2.3.1 System Limitations

    Due to time constraints, some limitations have been imposed on the project.

    The subject's hand is constrained so that a fixed Region Of Interest (ROI) can be used.

    Image capturing, segmentation and recognition are performed separately and not in real time.

    2.3.2 Performance Criteria

    The purpose of the project is to create a system that can be used for identification of individuals. The highest priority is therefore to achieve a low FAR so that a person is not falsely recognized as another. A lower TAR is less critical since a person being rejected can just try again. The criterion is thus to achieve a minimal FAR while maintaining an acceptable TAR.

    2.4. Related Works

    This section explores the utilization of hand vein patterns within a pattern recognition identification system. Although hand vein pattern recognition has not been studied as thoroughly as other biometrics, a lot of research still exists on the subject. Examples of these related works are given in the following subsections. Since the same general structure is used in all systems, the proposed methods are grouped by their purpose in the system.


    2.4.1 ROI extraction

    Extraction of the ROI is the first step after image capture. The problem with ROI extraction is that the extracted region has to be of the same size and position for every picture taken. To resolve this, two kinds of methods for extracting the ROI exist. The first kind, used for example by Wang and Leedham [14], fixes the hand at a certain position immediately below the camera. The image is then always cropped in the same way, which yields the same region every time but requires hand position constraints. A second, more complex kind of ROI extraction method also exists. These methods base the extraction on information taken from the captured image. For example, Kumar and Prathyusha [15] select key points on the contour of the hand in order to determine the ROI. This allows extraction of the same ROI every time, but with more flexibility with regard to the position of the hand.

    2.4.2 Image Processing

    The cropped version of the captured image is composed of the vein pattern along with unnecessary and unwanted information that needs to be removed. The system step which aims to enhance and extract the vein pattern is called image processing. It is divided into three steps: preprocessing, segmentation and post-processing.

    Preprocessing reduces noise in the cropped image, which could be the result of hairs or poor camera performance. This helps to improve the quality later on when the image is segmented. Different methods are used for preprocessing, and often they are a combination of several filters and algorithms [8]. Choi, for instance, applies a high-pass filter and then histogram based binarisation, both of which are meant to enhance the contrast. Afterwards a median filter is applied to remove noise due to hair [16]. In a different paper, Wang and Leedham [17] used a median filter along with a Gaussian low-pass filter, for double noise reduction (stipple noise with the Gaussian filter and noise due to hairs with the median filter).

    Once the noise has been reduced, the image goes through segmentation. The segmentation is used to obtain a good binary representation of the hand vein pattern. A common method for segmentation is local thresholding. It makes it possible to obtain a good separation between vascular pattern and background, where global thresholding would not work due to the variation in grey-level values of the veins at different locations. A local thresholding algorithm was used by Wang and Leedham [17], by Ding, Zhuang and Wang [18] and by Cross and Smith [19]. Other methods for segmentation also exist, such as Direction Based Vascular Pattern extraction, used by Im, Choi and Kim [20], repeated line tracking, which is used by Miura et al. [21] [22], and edge detection, developed for example by Lin and Fan [23]. All these segmentation methods have been fully tried and tested in different studies and all show good results on the separation between vein pattern and background.

    The post-processing is used to reduce the noise and remove blobs that are not part of the vein pattern. These post-processing algorithms are used to isolate the vein pattern. Once the vein pattern is segmented and post-processed, features still need to be extracted and matched.

    2.4.3 Feature Extraction & Matching

    Due to changes in the diameter of the veins, caused by different factors such as the ambient temperature, exercise, etc., the segmented images of the same hand vary a bit from time to time. In order to get around this problem, the system needs only to analyse


    the overall shape of the vein pattern. Thinning is a widely used method to extract this shape. Indeed, it produces a one-pixel wide line representation of the vein pattern. Ding, Zhuang and Wang improved the thinning algorithm to get rid of some unnecessary one-pixel points [18]. Many papers, such as Wang and Leedham [17] and Cross and Smith [19], applied thinning followed by a pruning algorithm to get rid of small unnecessary branches.

    The skeleton obtained from these different methods is then used to extract features and do the matching. Sometimes the skeleton itself is used as a feature, and other times features such as cross- and endpoints are extracted.

    Many papers use the cross- and endpoint features to do the matching. For example, Tsinghua University based their matching on the coordinates of these points in the thinned image [18]. Ding, Zhuang and Wang improved this matching procedure by basing the matching on the distances between cross- and endpoints [18]. Similar to the latter method, Choi used the branching characteristics (number of branches and connections between crossing points) to match images [16].

    Wang and Leedham [14] used the Hausdorff distance. The Hausdorff distance is a natural measure for comparing the similarity of shapes. It is a distance measure between two point sets. Unlike Euclidean-distance recognition techniques that need a one-to-one correspondence between the template and the testing data, the Hausdorff distance can be computed without explicit point correspondence.
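    As a hedged illustration (not the code of any of the cited works), a common Modified Hausdorff Distance formulation takes the larger of the two directed average nearest-neighbour distances between the point sets; the Point type and the brute-force nearest-neighbour search below are assumptions made only for this sketch.

        #include <algorithm>
        #include <cmath>
        #include <limits>
        #include <vector>

        struct Point { double x, y; };   // one foreground pixel of a thinned vein skeleton

        // Average distance from every point in 'from' to its nearest neighbour in 'to'.
        // Assumes both sets are non-empty.
        static double directedAverageDistance(const std::vector<Point>& from,
                                              const std::vector<Point>& to)
        {
            double sum = 0.0;
            for (const Point& p : from) {
                double best = std::numeric_limits<double>::max();
                for (const Point& q : to)
                    best = std::min(best, std::hypot(p.x - q.x, p.y - q.y));
                sum += best;
            }
            return sum / static_cast<double>(from.size());
        }

        // Modified Hausdorff Distance: the larger of the two directed average distances.
        double modifiedHausdorffDistance(const std::vector<Point>& a, const std::vector<Point>& b)
        {
            return std::max(directedAverageDistance(a, b), directedAverageDistance(b, a));
        }

    Two patterns would then be declared a match when such a distance stays below a trained threshold, which is the role of the threshold mentioned in the abstract.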

    Other methods were also tried and tested. For example, Cross and Smith used the medial axis representation as the feature of the vein pattern and then applied constrained sequential correlation to match the pattern [19], and Lin and Fan proposed the use of multi-resolution analysis features to analyse the palm-dorsal vein patterns [23].

    Many matching procedures exist and have already shown good results. An evaluation of the performance of the different systems cited above is necessary to conclude whether these methods could be applied in real-life biometric recognition systems.

    2.4.4 Performance

    As mentioned earlier, the performance of a biometric system is essential to determine whether the system has the potential to be applied in real-life situations. Table 2.1 [8] shows the performance of some of the methods discussed earlier.


    Method                    | Subjects | Images used for test  | Match attempts               | FAR    | FRR    | FTE
    Cross and Smith [19]      | 20       | 2                     | 40 genuine, 760 impostor     | 0%     | 7.5%   | 0% (1 out of 34 subjects failed and needed a new trial)
    Wang and Leedham [17]     | 12       | 6                     | 72 genuine, 792 impostor     | 0%     | 0%     | 0%
    Tsinghua University [18]  | 13       | 5 (and reference)     | 260 genuine, 3120 impostor   | 0%     | 4.6%   | 0%
    Harbin University [18]    | 48       | 5 (and reference)     | 960 genuine, 45120 impostor  | 0%     | 0.8%   | 0%
    Miura (2004) [22]         | 678      | 1                     | 678 genuine, 459006 impostor | 0.145% | 0.145% | 0%
    Miura (2006) [21]         | 678      | 1                     | 678 genuine, 459006 impostor | 1%     | 0%     | 0%
    Lin and Fan [23]          | 32       | 15                    | 480 genuine, 14880 impostor  | 3.5%   | 1.5%   | a few

    Table 2.1: Performance of the different methods discussed above [8].

    The performance is measured by three key metrics, as defined in [8]:

    FAR, the probability that an unauthorized person is accepted as an authorized person.

    False Rejection Rate (FRR), the probability that an authorized person is rejected as an unauthorized person.

    Failure To Enroll (FTE), the probability that a given user will be unable to enroll in a biometric system due to an insufficiently distinctive biometric sample.

    These three metrics can lead to a wrong performance estimate if only two of them are used without the third. They are linked and mutually dependent in determining the performance of a system.
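    As a small illustration of how the three rates relate to raw test counts, the sketch below computes them from hypothetical tallies; the counts and variable names are invented for the example and are not taken from Table 2.1.

        #include <cstdio>

        int main()
        {
            // Hypothetical test tallies for a verification experiment.
            const int impostorAttempts = 1000, falseAccepts  = 2;   // impostors wrongly accepted
            const int genuineAttempts  = 200,  falseRejects  = 9;   // genuine users wrongly rejected
            const int enrollAttempts   = 40,   failedEnrolls = 1;   // users the system could not enroll

            const double far = 100.0 * falseAccepts  / impostorAttempts;
            const double frr = 100.0 * falseRejects  / genuineAttempts;
            const double fte = 100.0 * failedEnrolls / enrollAttempts;

            std::printf("FAR = %.2f%%  FRR = %.2f%%  FTE = %.2f%%\n", far, frr, fte);
            return 0;
        }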


    CHAPTER 3
    System Overview

    Pattern recognition systems can exist in endless forms, dealing with problems in many different fields and using different methods to achieve their goals. In spite of this, most, if not all, systems tend to follow the same overall structure. As seen in section 2.4, the structure of a hand vein pattern recognition system can be summed up by the general block diagram shown in figure 3.1.

    Figure 3.1: Pattern recognition system diagram

    The hand vein pattern recognition system of this project is designed according to this general structure. It has been implemented in C++ using Armadillo [24], a linear algebra library, and the image library DevIL [25]. The following list explains the purpose of each block.

    3.0.4.1 Image Acquisition & Preprocessing

    In Chapter 4 on page 15, the image acquisition and preprocessing steps are discussed in detail.

    The image of the hand is captured using an ordinary webcam that has been modified to only allow infrared light to reach the image sensor. To normalize the images and limit movement and rotation of the hand, a setup has been built specifically for the purpose. This setup is described in further detail in section 4.1 on page 15.

    After taking the image and before extraction of the vein pattern, preprocessing is applied to the image. The purpose of this step is to improve the image quality so that


    vein patterns can be more easily detected during the segmentation. This is done by first cropping the image to isolate the ROI, described in section 4.1 on the next page, and then applying filters to reduce noise and enhance the contrast. The methods involved are described in section 4.2 on page 20.

    3.0.4.2 Segmentation & Post-Processing

    The segmentation and post-processing steps are detailed in Chapter 5 on page 25.

    Once the noise has been reduced and the contrast enhanced, segmentation separates the vein pattern from the background. The vein pattern is located and isolated from the rest of the image, thus binarizing it. This is the most crucial step in the entire recognition process. If the veins are not properly detected, the risk of errors increases greatly. Thus, the chosen method plays a big role in the overall performance of the system and is described in chapter 5 on page 25.

    The output image of the segmentation step is a binary image with some unwanted information such as noise, shadows and faint veins. Therefore it is not always a true representation of the actual vein pattern. Section 5.6 on page 35 details the post-processing step, which attempts to clean up the image.

    3.0.4.3 Feature Extraction & Matching

    The last two steps of the system, feature extraction and matching, are described in Chapter 6 on page 45.

    The feature extraction step aims to extract the actual features of the vein pattern from an image, which are then used for matching. If the image is an enrolled sample, the features are saved in a database for later matching. This step is described in section 6.1 on page 45.

    Once the features are extracted, they are compared with the ones in the database and based on that comparison a decision is taken. The matching step is described in section 6.4 on page 56. Basically, if the input features are similar to a set in the database, the image is identified accordingly; otherwise it is rejected.


    CHAPTER 4
    Image Acquisition & Preprocessing

    This chapter describes the first two steps in the vein recognition process: image acquisition, where an input image of the hand is captured, and preprocessing, where that image is prepared for further processing by the system. First the image capturing setup is described; this setup has been built to obtain stable input images and allow for a fixed ROI. Next the methods used for preprocessing of the input images are reviewed. The purpose of these methods is to reduce noise in the image and to enhance the contrast. The flowchart in figure 5.28 shows all the steps involved in image acquisition and preprocessing.

    4.1. Capture Setup

    The purpose of this part of the project is to create a capture setup that is efficient and has a low cost. A cheap webcam (a Logitech Webcam Pro 9000, shown in Figure 4.1) was chosen for taking the pictures of the hand. The camera has been modified so it can capture NIR images. A captured image of a hand in the NIR spectrum shows the veins in black and the skin in white, as explained in subsection 2.2.2.

    Figure 4.1: An unmodified Logitech Webcam Pro 9000 [26]


    Figure 4.2: Illustration of the spectral response of a webcam with a CMOS sensor [27].

    Figure 4.3: Illustration of the spectral response of a Kodak film negative [28].


    4.1.1 Modification of the camera

    Most webcams have a CMOS image sensor, which is sensitive to both visible light and NIR light as shown in figure 4.2. Most are also sold with a filter which blocks all NIR light in order to improve the quality of the image in the visible spectrum. However, the camera used for this project had this filter removed, and was thus already sensitive to both visible and NIR light. So in order to capture only images in the NIR spectrum, visible light must be blocked; otherwise it would reduce the contrast between veins and background, which would reduce the usability of the image.

    The easiest and cheapest way to block the visible light is to use pieces of a color photographic negative as a filter. For the project setup the film negative Kodak Gold ISO 200 was used, as it blocks most of the visible light while being very transparent to NIR light. Figure 4.3 shows the response curve for a filter made from an exposed Kodak color film. It shows that the film negative is a very good filter for this purpose. After developing the film, it is cut into small pieces fitting the size of the webcam lens and fixed onto it. Such a modified webcam produces pictures in the NIR spectrum.

    4.1.2 Illumination

    Different factors influence the image quality, and an important one is the lighting conditions. As stated previously in subsection 2.2.2, the hand needs to be illuminated in order for the device to obtain an image of the hand vein pattern. Hence Infra-Red (IR) Light Emitting Diodes (LEDs) were added to the setup. The position of these LEDs is very important to get a uniform illumination. It has been suggested that a convex surface such as the back of the hand can be optimally lit at an angle of 55 degrees [8]. The LEDs should be positioned on both sides of the hand at this angle in order to reduce shadows caused by small level differences on the hand's surface. Moreover, the LEDs should be positioned higher than the camera to keep the light from entering the camera lens directly. Based on these spatial requirements, the design shown in figure 4.4 was made. Some LEDs are added on both sides of the camera in order to improve the quality of the illumination.

    Figure 4.4: Illustration of the capture setup.


    4.1.3 Hand Constraints & Region Of Interest

    After an image has been acquired, the ROI needs to be extracted. As stated in section 1.1, the hand is constrained in order to restrict movement, making a fixed ROI feasible.

    There are several ways to constrain the hand to prevent rotation and translation. A hand grip and a rod are used, where the person rests his/her arm in order to prevent any rotation of the hand. To prevent movements, two pins have been added, one on the hand grip, the other on the rod. By placing the arm as shown in the setup illustrated in Figure 4.5, most movement and rotation is prevented. The ROI is then obtained as a 319x354 rectangle with its upper left corner at pixel (167,52). The resulting ROIs extracted from different images (of the same hand and of different hands) are shown in figures 4.6 and 4.7.

    Figure 4.5: Illustration of the constraint method: the hand needs to be positioned in this way before gripping the hand grip.

    Thanks to the constraints, the captured image covers approximately the same region each time with just a few variations, as shown in figure 4.6 and figure 4.7.


    Figure 4.6: Illustration of the ROI extraction of the same hand with two different pictures.

    Figure 4.7: Illustration of the ROI extraction of the same hand with two different pictures.
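    A minimal sketch of this fixed-ROI cropping step, assuming the grayscale frame is stored row-major in a std::vector; the Image structure is an illustrative stand-in, not the DevIL-based representation used in the project, and the 319x354 rectangle is taken here as width x height.

        #include <cstddef>
        #include <vector>

        // Illustrative row-major 8-bit grayscale image.
        struct Image {
            std::size_t width = 0, height = 0;
            std::vector<unsigned char> pixels;   // size == width * height
            unsigned char at(std::size_t x, std::size_t y) const { return pixels[y * width + x]; }
        };

        // Extract the fixed ROI: a 319x354 rectangle with its upper left corner at pixel (167, 52).
        Image extractFixedRoi(const Image& frame)
        {
            const std::size_t roiX = 167, roiY = 52, roiWidth = 319, roiHeight = 354;
            Image roi;
            roi.width  = roiWidth;
            roi.height = roiHeight;
            roi.pixels.resize(roiWidth * roiHeight);
            for (std::size_t y = 0; y < roiHeight; ++y)
                for (std::size_t x = 0; x < roiWidth; ++x)
                    roi.pixels[y * roiWidth + x] = frame.at(roiX + x, roiY + y);
            return roi;
        }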


    4.2. Preprocessing Methods

    This section describes the methods that are used to preprocess the input images. The preprocessing step serves two main purposes. The first is smoothing and noise removal. Since the images are captured using a modified consumer webcam, considerable noise can occur in the images. Gaussian and median filters are used to remedy the effect of this noise. The second is contrast enhancement. This is necessary as the vein pattern can be faint. Histogram stretching is used to add contrast between the veins and the background.

    4.2.1 Smoothing & Noise Removal

    There are many ways to deal with noise in images. Some methods exploit the fact that the noise is a random variable with 0 mean that is added to the image. Thus by averaging the image, the effect of the noise is canceled out. Unfortunately this produces a smeared version of the image where small veins might be lost in the process.

    A different approach is to exploit the fact that the noise tends to have high frequencies, while the important features in the image do not.

    4.2.1.1 Gaussian Filter

    Figure 4.8: A plot showing the 1-dimensional Gaussian distribution with σ = 1

    Figure 4.9: A plot showing the 2-dimensional Gaussian distribution with σ = 1

    A Gaussian filter is a smoothing filter based on the Gaussian distribution. It is suitable for image noise removal because it acts as a low pass filter, attenuating high frequency noise while leaving the lower frequency features unchanged. It is defined in one dimension as follows:

    G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}    (4.1)

    where σ is the standard deviation,

    and in 2-D as:

    G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}    (4.2)


    Figures 4.8 and 4.9 show plots of the two functions. The low pass filter property of the Gaussian filter can be seen from its Fourier transform, which is itself a Gaussian function:

    G(\omega) = e^{-\frac{\sigma^2 \omega^2}{2}}    (4.3)

    This means that the filter attenuates rapid changes in an image, effectively smoothing it. The amount of smoothing depends on the chosen standard deviation σ. The higher σ is, the smoother the resulting image will be.

    The filter can be applied to an image either by using a 2-D convolution (4.2) or by using 1-D convolutions of each dimension separately (4.1). Good results were obtained by setting σ = 0.5. This yields the following discrete kernel used in this project (all entries scaled by 10^{-4}):

        0    0    0    0    1    0    0    0    0
        0    0    2   11   18   11    2    0    0
        0    2   29  131  215  131   29    2    0
        0   11  131  585  965  585  131   11    0
        1   18  215  965 1592  965  215   18    1
        0   11  131  585  965  585  131   11    0
        0    2   29  131  215  131   29    2    0
        0    0    2   11   18   11    2    0    0
        0    0    0    0    1    0    0    0    0
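    A kernel of this form can be built by sampling equation (4.2) on a square grid and normalizing the samples so they sum to one, which keeps the overall brightness unchanged. The sketch below is an illustrative reconstruction of that procedure, with the radius and σ left as parameters; it is not the project's Armadillo-based code, and the exact values produced for a given parameter choice may differ from the table above because of scaling and rounding.

        #include <cmath>
        #include <vector>

        // Build a normalized discrete 2-D Gaussian kernel of size (2*radius+1) x (2*radius+1)
        // by sampling equation (4.2) at integer offsets from the centre.
        std::vector<std::vector<double>> gaussianKernel(int radius, double sigma)
        {
            const double pi = 3.14159265358979323846;
            const int size = 2 * radius + 1;
            std::vector<std::vector<double>> kernel(size, std::vector<double>(size));
            double sum = 0.0;
            for (int y = -radius; y <= radius; ++y) {
                for (int x = -radius; x <= radius; ++x) {
                    const double value = std::exp(-(x * x + y * y) / (2.0 * sigma * sigma))
                                         / (2.0 * pi * sigma * sigma);
                    kernel[y + radius][x + radius] = value;
                    sum += value;
                }
            }
            for (auto& row : kernel)            // normalize so the weights sum to 1
                for (double& value : row)
                    value /= sum;
            return kernel;
        }

    Because the Gaussian is separable, the same smoothing can also be obtained with two 1-D passes of equation (4.1), one per image dimension, as noted above.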

    4.2.1.2 Median Filtering

    Another source of noise in the images is hairs, which show up as very thin dark lines. A way to remove these is to use a median filter. The median filter works by replacing pixel values with the median value of their neighbourhood. This is done by iterating through every pixel in an image and looking at its neighbours within a specified distance. These pixel values are then gathered and sorted. The value in the middle of the resulting set is then chosen to be the center pixel's new value. As an example, consider the following pixel and its neighbours within a radius of 1:

        124   124   124
        126  >203<  203
        128   130   124

    The set of surrounding pixel values is thus:

    P = {124, 124, 126, 203, 128, 130, 124, 203, 124}    (4.4)

    Sorting them according to value yields the following set:

    Psorted = {124, 124, 124, 124, 126, 128, 130, 203, 203} (4.5)

    By choosing the value in the middle, the pixel's new value becomes 126.
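    A sketch of this 3x3 median filtering step on a row-major grayscale buffer; std::nth_element is used to pick the middle of the nine neighbourhood values, and the border handling (a plain copy) is an assumption made to keep the sketch short.

        #include <algorithm>
        #include <array>
        #include <vector>

        // Apply a 3x3 median filter to a row-major 8-bit grayscale image.
        std::vector<unsigned char> medianFilter3x3(const std::vector<unsigned char>& image,
                                                   int width, int height)
        {
            std::vector<unsigned char> result = image;          // border pixels stay as they are
            for (int y = 1; y < height - 1; ++y) {
                for (int x = 1; x < width - 1; ++x) {
                    std::array<unsigned char, 9> window;
                    int i = 0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            window[i++] = image[(y + dy) * width + (x + dx)];
                    // The 5th smallest of the nine values is the median (126 in the example above).
                    std::nth_element(window.begin(), window.begin() + 4, window.end());
                    result[y * width + x] = window[4];
                }
            }
            return result;
        }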

    4.2.2 Contrast Enhancement with Histogram Stretching

    While the use of IR image capturing makes the veins stand out more clearly, it is often necessary to further improve the contrast before segmenting the image. A simple but very effective method to do this is histogram stretching. This method exploits the fact


    Figure 4.10: Histogram of the sample image.

    Figure 4.11: Stretched histogram of the sample image

    that most images' pixel values don't span the entire range of possible values from 0 to 255. In the input images, the pixel values tend to be distributed closely together near the middle of the histogram, as shown in figure 4.10.

    Figure 4.11 shows the stretched histogram, and figures 4.12 and 4.13 show the result when histogram stretching is applied to the sample image.

    Figure 4.12: Example of an unprocessed input image

    Figure 4.13: The result of applying histogram stretching to the input image

    In the simplest form, a histogram stretching algorithm uses the lower limit a and the upper limit b to transform the colors in the image. All color values in between a and b will be transformed so they span the entire range from 0 to 255. The colors below a and above b will be set to 0 and 255 respectively. The first step is to find c, which is the mean of a and b:

    c = a + \frac{b - a}{2}    (4.6)

    Then every pixel in the image is transformed as follows:

    T(x) = \begin{cases}
        128 + \frac{x - c}{b - c} \cdot 127 & c \le x \le b \\
        \frac{x - a}{c - a} \cdot 127 & a \le x < c \\
        0 & x < a \\
        255 & x > b
    \end{cases}    (4.7)


    Using this method, the color space is stretched equally around the mean of the two limits. An extension is to let c be a variable in between a and b. This allows for uneven stretching around c, which is a simple form of gamma correction.
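    A sketch of the transform T(x) of equation (4.7) applied pixel-wise, with c taken as the mean of the limits as in equation (4.6). How a and b are chosen in practice (for example from the darkest and brightest populated histogram bins) is left open here as an assumption, and the function expects a < b.

        #include <cstddef>
        #include <vector>

        // Apply the histogram stretching transform of equation (4.7) to every pixel.
        // a and b are the lower and upper limits; values below a map to 0, above b to 255.
        std::vector<unsigned char> stretchHistogram(const std::vector<unsigned char>& image,
                                                    int a, int b)
        {
            const double c = a + (b - a) / 2.0;                  // equation (4.6)
            std::vector<unsigned char> result(image.size());
            for (std::size_t i = 0; i < image.size(); ++i) {
                const double x = image[i];
                double t;
                if (x < a)       t = 0.0;
                else if (x > b)  t = 255.0;
                else if (x < c)  t = (x - a) / (c - a) * 127.0;          // lower half -> [0, 127]
                else             t = 128.0 + (x - c) / (b - c) * 127.0;  // upper half -> [128, 255]
                result[i] = static_cast<unsigned char>(t + 0.5);
            }
            return result;
        }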

    4.3. Summary & Results

    Acquiring images of the hand is the first step of this project's system. It is followed by the preprocessing phase, which, as previously stated, improves the quality of the images for the next processing steps. Figure 4.16 details the different blocks in the preprocessing part.

    Figure 4.14: Result of applying a Gaussian and a median filter to the ROI-extracted image

    Figure 4.15: Histogram stretching after both Gaussian and median filters.

    By the time it reaches the preprocessing, the system has already performed the image capture and region of interest extraction, following the constraints quickly summed up here.

    The capture setup is designed to acquire images as shown previously in Figure 4.4. It is composed of a modified camera, which takes pictures in the NIR wavelengths, and additional LEDs that light up the veins. After image acquisition, noise still needs to be removed. This is accomplished with the filters described in the preprocessing phase. As stated in the system limitations in section 2.3.1 on page 8, it has been decided not to implement an automatic ROI extraction, but to fix the hand at a specific place on the setup and then always crop the image at the same predefined place. This ROI extraction is applied in order to extract the image part that is relevant for the recognition process.

    After the ROI extraction, the cropped version of the picture is used in the preprocessing. The preprocessing is composed of two low-pass filters. First a smoothing Gaussian low-pass filter is applied to remove noise in the image, and then a median filter is used to reduce noise due to hairs. The result of these two filters is shown in figure 4.14. At the end of the preprocessing, histogram stretching is applied in order to enhance contrast


    and prepare the image to be segmented.

    Finally, the preprocessed output image, shown in Figure 4.15, is sent to the segmentation block.

    Figure 4.16: Flow Chart of the preprocessing.


    CHAPTER 5
    Image Segmentation & Postprocessing

    The purpose of this chapter is to explore some of the methods that have been proposed in the literature for extracting the vein pattern in an image. This process of segmentation is crucial to the performance of the system and great care has thus been taken to evaluate the available methods. Three methods have been chosen for evaluation: Repeated Line Tracking, Laplacian of Gaussian and Local Thresholding in conjunction with Direction Based Vascular Pattern Extraction. This chapter first reviews each method and then compares their performance on real input images captured by the setup described in chapter 4 on page 15. The method yielding the best results is then chosen for use in the project.

    Next a number of postprocessing methods are discussed. These are methods used to clean up the segmented image and remove undesired elements caused by noise. It is then shown how a combination of these methods can improve the quality of the segmented image substantially.

    5.1. Repeated Line Tracking

Repeated line tracking as described in [22] is a method of tracing lines along the vein patterns in a preprocessed image. This process is repeated a specified number of times until the vein pattern can be extracted. For every iteration, a starting point in the form of a pixel is chosen. From this point, the algorithm attempts to trace a line in the direction of the vein pattern, based on a cross sectional profile of the image in a given direction (shown in figure 5.1). If the tracking point is inside a vein, the cross section will show a valley. The next tracking point is chosen in the direction of the deepest valley. Every time a pixel is chosen as a tracking point, its value is increased in a corresponding pixel map. When all iterations are completed, the created pixel map should have high intensities within the vein boundaries.

    5.1.1 Description

Initially, the set of pixels used as tracking points is limited to the ROI, which is defined in section 4.1.3. This region is called R_f. Next the locus space T_r is initialized, which is a discrete 2D-space of the same size as the processed image. Every entry corresponds to a pixel in the image, and its value is determined by the number of times that pixel has been a tracking point.

    Figure 5.1: Illustration of dark line tracking [22].

For each round of tracking, a random point is chosen from within the region R_f; its coordinates are labelled (x_c, y_c). A locus position table T_c is also initialized. This table will hold all the tracking points found in the current round. Because veins tend to move in straight lines, the moving-direction attributes D_{lr} and D_{ud} are defined as follows:

D_{lr} = \begin{cases} (1, 0) & \text{if } Rnd(2) < 1 \\ (-1, 0) & \text{otherwise} \end{cases}
\qquad
D_{ud} = \begin{cases} (0, 1) & \text{if } Rnd(2) < 1 \\ (0, -1) & \text{otherwise} \end{cases}

where Rnd(n) is a uniform random value between 0 and n.

The moving-direction attributes are determined at the beginning of every round. They are used to bias the selection of the next tracking point towards a given direction. This keeps the resulting track from curving excessively, as it is forced to go in that direction only. The next step is, if possible, to find a new tracking point. The candidates are the neighboring pixels that are in R_f and have not previously been assigned as tracking points in the current round. This can be expressed formally as:

N_c = \overline{T_c} \cap R_f \cap N_r(x_c, y_c) \qquad (5.1)

where N_c is the set of pixels that are candidates for the new tracking point and \overline{T_c} is the set of pixels that have not yet been used as tracking points in the current round. The neighboring-pixels function N_r(x_c, y_c) is defined as

N_r(x_c, y_c) =
\begin{cases}
N_3(D_{lr})(x_c, y_c) & \text{if } Rnd(100) < p_{lr} \\
N_3(D_{ud})(x_c, y_c) & \text{if } p_{lr} \le Rnd(100) < p_{lr} + p_{ud} \\
N_8(x_c, y_c) & \text{if } p_{lr} + p_{ud} + 1 \le Rnd(100)
\end{cases} \qquad (5.2)

where:
p_{lr} is the defined probability that a horizontal direction is chosen
p_{ud} is the defined probability that a vertical direction is chosen
N_8 is the full set of 8 neighboring pixels
N_3 is a limited set of 3 neighboring pixels determined by the moving-direction attributes D_{lr} and D_{ud}


    N3(D) is defined as follows:

N_3(D)(x, y) = \{ (D_x + x,\; D_y + y),\; (D_x - D_y + x,\; D_y - D_x + y),\; (D_x + D_y + x,\; D_y + D_x + y) \} \qquad (5.3)

where D can be one of the two moving-direction attributes. In other words, the algorithm first determines the set of candidates for the new tracking point. If a random number between 0 and 100 is at least p_{lr} + p_{ud} + 1, all 8 neighbouring pixels are candidates; otherwise the set is limited to the 3 pixels specified by N_3(D)(x, y).

The next step is to find the direction that has the deepest valley in its cross sectional profile. This is done by using the line evaluation function, defined as:

V_l = \max_{(x_i, y_i) \in N_c} \Big\{
F\big(x_c + r\cos\theta_i - \tfrac{W}{2}\sin\theta_i,\; y_c + r\sin\theta_i + \tfrac{W}{2}\cos\theta_i\big)
+ F\big(x_c + r\cos\theta_i + \tfrac{W}{2}\sin\theta_i,\; y_c + r\sin\theta_i - \tfrac{W}{2}\cos\theta_i\big)
- 2F\big(x_c + r\cos\theta_i,\; y_c + r\sin\theta_i\big)
\Big\} \qquad (5.6)

where:
F(x, y) is a function that returns the light intensity of pixel (x, y) in the image
r is the distance between (x_c, y_c) and the cross section
\theta_i is the angle between the line segments (x_c, y_c)-(x_c + 1, y_c) and (x_c, y_c)-(x_i, y_i)
W is the width of the cross section

If V_l is positive, the pixel is a valid tracking point: it is added to T_c and used for the next iteration. If V_l is zero or negative, none of the neighboring pixels are valid tracking points, i.e. none of them lie inside a vein, and the round is finished. T_r is then updated by incrementing all values that correspond to the points added to T_c, and a new round begins.

This process is repeated N times. A higher N will yield a more accurate vein pattern, but at the cost of increased computation time. An appropriate value for N should be found through experiments.
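As an illustration of the line-evaluation step, the sketch below computes the valley depth of equation 5.6 for one candidate neighbour and selects the deepest valley among a set of candidates. It is a simplified reading of the method: pixel values are looked up at the nearest integer coordinates rather than interpolated, the image is assumed to be a 2D NumPy array indexed as F[y, x], and the default values r = 1 and W = 11 are placeholders, not values taken from [22] or from this project.

    import numpy as np

    def valley_depth(F, xc, yc, xi, yi, r=1, W=11):
        # Cross-sectional valley depth (equation 5.6) for one candidate (xi, yi).
        # A positive value means the cross section centred r pixels ahead of
        # (xc, yc), in the direction of the candidate, looks like a dark valley.
        theta = np.arctan2(yi - yc, xi - xc)      # direction towards the candidate
        px = xc + r * np.cos(theta)               # centre of the cross section
        py = yc + r * np.sin(theta)
        dx = (W / 2.0) * np.sin(theta)            # half-width, perpendicular offset
        dy = (W / 2.0) * np.cos(theta)

        def intensity(x, y):                      # nearest-pixel lookup, clamped to the image
            h, w = F.shape
            col = int(np.clip(np.rint(x), 0, w - 1))
            row = int(np.clip(np.rint(y), 0, h - 1))
            return float(F[row, col])

        return (intensity(px - dx, py + dy)
                + intensity(px + dx, py - dy)
                - 2.0 * intensity(px, py))

    def best_candidate(F, xc, yc, candidates, r=1, W=11):
        # Pick the neighbour with the deepest positive valley; None ends the round.
        if not candidates:
            return None
        scored = [(valley_depth(F, xc, yc, x, y, r, W), (x, y)) for (x, y) in candidates]
        depth, point = max(scored)
        return point if depth > 0 else None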

    Figure 5.2: Result of the repeated line tracking algorithm from [22].

Figure 5.2 shows the results obtained by [22], using the algorithm described in the previous section. Considering that the contrast between vein patterns and the surroundings is quite uneven throughout the image, the result is impressive. The authors state that at least 3000 rounds of tracking were necessary to achieve good results. Since they used a relatively small image size, this number is expected to be higher for hand vein recognition purposes.


    5.2. Edge Detection Using Laplacian of Gaussian

Edge detection using the Laplacian of Gaussian filter is a commonly used method that combines edge detection with noise reduction. It is briefly described in the following subsections.

    5.2.1 Concept

The Laplacian of Gaussian filter is a combinatory filter, derived from two common image processing methods: Laplacian edge detection and Gaussian noise reduction. The Laplacian is a 2D measure of the second spatial derivative of an image, used to segment edges along objects. The method is highly sensitive, which means that it can segment most edges, but it is also quite vulnerable to noise. To compensate for this weakness, Laplacian edge detection is combined with Gaussian noise filtering, thereby suppressing noisy edges and preserving the ones belonging to true objects.

    5.2.2 Description

At its core, the Laplacian of Gaussian method is a Gaussian filter applied before a Laplacian filter, which is mathematically described as:

LoG_\sigma f(x, y) = \nabla^2 [G_\sigma(x, y)] * f(x, y), \qquad (5.9)

where:
LoG_\sigma is the Laplacian of Gaussian operator
f(x, y) is the image
\nabla^2 is the Laplace operator
G_\sigma(x, y) is the Gaussian kernel
\sigma is the standard deviation of the Gaussian
* denotes convolution

The Gaussian operator is expressed as:

G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right), \qquad (5.10)

where:
x and y are the coordinates relative to the kernel center

and the Laplacian operator is expressed as:

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}. \qquad (5.11)

Using this information, the LoG operator can be derived:


\nabla^2 G_\sigma(x, y)
= \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)

= \frac{1}{2\pi\sigma^2} \left[ \frac{\partial}{\partial x}\left( -\frac{x}{\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \right) + \frac{\partial}{\partial y}\left( -\frac{y}{\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \right) \right]

= \frac{1}{2\pi\sigma^2} \left[ \frac{x^2 - \sigma^2}{\sigma^4} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right) + \frac{y^2 - \sigma^2}{\sigma^4} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \right]

= \frac{1}{2\pi\sigma^2} \cdot \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)

When used in image processing, this is translated into a discrete kernel, an example of which is given in table 5.1.

 0   1   1    2    2    2   1   1   0
 1   2   4    5    5    5   4   2   1
 1   4   5    3    0    3   5   4   1
 2   5   3  -12  -24  -12   3   5   2
 2   5   0  -24  -40  -24   0   5   2
 2   5   3  -12  -24  -12   3   5   2
 1   4   5    3    0    3   5   4   1
 1   2   4    5    5    5   4   2   1
 0   1   1    2    2    2   1   1   0

Table 5.1: Example of a 9x9 LoG kernel with \sigma = 1.4.
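A discrete kernel can also be generated directly from the closed-form expression derived above. The following sketch, using NumPy and SciPy as an assumed toolset (not the project's implementation), samples \nabla^2 G_\sigma on a square grid and convolves it with the image; unlike the integer kernel in table 5.1, the result is not scaled to integer values.

    import numpy as np
    from scipy import ndimage

    def log_kernel(size=9, sigma=1.4):
        # Sample the closed-form Laplacian of Gaussian on a size x size grid.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        factor = (x**2 + y**2 - 2 * sigma**2) / (2 * np.pi * sigma**6)
        return factor * np.exp(-(x**2 + y**2) / (2 * sigma**2))

    def log_filter(img, size=9, sigma=1.4):
        # Convolve the image with the LoG kernel; zero crossings mark edges.
        return ndimage.convolve(img.astype(np.float64), log_kernel(size, sigma))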

Figure 5.3 shows an example of Laplacian of Gaussian edge detection.

Figure 5.3: An image of a vein pattern, before and after it is edge detected using Laplacian of Gaussian.

    5.3. Adaptive Local Threshold

Thresholding creates a binary image from a grey-level one by setting all pixels below a certain fixed value, called the threshold, to zero and all pixels at or above it to one. If g(x, y) is the thresholded version of an input image f(x, y) with some threshold T, then the thresholding process can be described as:

g(x, y) = \begin{cases} 1, & \text{if } f(x, y) \ge T \\ 0, & \text{otherwise} \end{cases} \qquad (5.12)

Because the grey-level intensity values of the veins vary across the image, global thresholding does not provide satisfactory results [14]. Figure 5.4 below shows the results of a global threshold and a local threshold applied to a preprocessed image using Matlab functions. Local thresholding is described later in this section.

    Figure 5.4: Illustration of global and local threshold. (a) Original image. (b) Globalthresholded image. (c) Local thresholded image.

As can be seen in figure 5.4(b), the vein pattern is not segmented well. Local thresholding is therefore the preferable binarization method.

    5.3.1 Concept

Local adaptive thresholding selects an individual threshold t(x, y) for every pixel, based on an analysis of its local neighborhood. Pixels are defined as being in a pixel's neighborhood if they lie inside a quadratic \omega \times \omega kernel with the pixel in question as its center. The threshold is found by calculating the mean of the neighboring pixel values [17].

If we take t(x, y) as the threshold for each pixel of an input image f(x, y), we obtain the thresholded version g(x, y) by comparing each pixel to its t(x, y):

g(x, y) = \begin{cases} 1, & \text{if } f(x, y) \ge t(x, y) \\ 0, & \text{otherwise} \end{cases} \qquad (5.13)

The threshold t(x, y) is the sum of the values of the pixel's surrounding neighbors divided by the number of those neighboring pixels. Using the integral image method [29], the threshold can be calculated for any grey-scale image and any window size \omega with the following equation:

t(x, y) = \frac{1}{\omega^2} \sum_{i = x - \omega/2}^{x + \omega/2} \; \sum_{j = y - \omega/2}^{y + \omega/2} f(i, j) \qquad (5.14)

where:
f(i, j) is the pixel value at (i, j)
\omega is the window size

    5.4. Direction Based Vascular Pattern Extraction

This method is not a segmentation method in itself; rather, it is used to improve the results of the local threshold segmentation. Conventional vein pattern extraction does not take directional information into consideration, so losses in the connectivity of the pattern are likely to occur, leading to an incomplete representation of the pattern. The connectivity losses increase if the subject has a thin or contracted vascular pattern. The Direction Based Vascular Pattern Extraction (DBVPE) algorithm reduces connectivity losses and hence improves the accuracy of the segmentation.

The algorithm consists of two filters that emphasize the vein pattern: a Row Vascular Pattern Emphasizing Filter (RVPEF) and a Column Vascular Pattern Emphasizing Filter (CVPEF), which extract the horizontal and vertical vascular pattern respectively. Both filters are applied to the image, producing two outputs, which are then combined to produce a final enhanced output.

The RVPEF uses an M \times N kernel with horizontally oriented characteristics, and the CVPEF uses the transpose of that kernel, which has vertically oriented characteristics:

o(x_c, y_c) = \sum_{x=1}^{M} \sum_{y=1}^{N} z(x, y)\, w(x, y) \qquad (5.15)

where:
(x_c, y_c) are the coordinates of the pixel corresponding to the center pixel of the filter mask
o(x, y) is the output image
z(x, y) is the filter mask that contains the values of the corresponding pixels of the input image
M is the abscissa size of the mask
N is the ordinate size of the mask
w(x, y) contains the emphasizing filter coefficients

In [20], the coefficients of the emphasizing filters are proposed as powers of two, so that the filtering process can be performed using only binary shifts. To increase computational performance, the filter can be represented as two 1D arrays, which allows the filtering to be performed as two 1D convolutions instead of one 2D convolution. The complexity therefore drops from O(M \cdot N \cdot S) to O((M + N) \cdot S), where M and N are the abscissa and ordinate sizes of the mask and S is the size of the image.

The emphasizing of the row and column vascular patterns is due to the coefficients of the kernel. The coefficients determine their coverage areas, which are complementary, as can be seen in figure 5.5. The figure shows that for the RVPEF the values of the pixels in the abscissa direction have a bigger influence than those in the ordinate direction; for the CVPEF it is the opposite.

    Figure 5.5: The coverage areas of the RVPEF and the CVPEF [20]

As proposed in [20], the two 1D kernels are:

1 \times 11 kernel mask A = [2^0\; 2^0\; 2^1\; 2^2\; 2^4\; 2^5\; 2^4\; 2^2\; 2^1\; 2^0\; 2^0]
1 \times 17 kernel mask B = [2^0\; 2^0\; 2^0\; 2^0\; 2^0\; 2^1\; 2^3\; 2^4\; 2^5\; 2^4\; 2^3\; 2^1\; 2^0\; 2^0\; 2^0\; 2^0\; 2^0]

For the RVPEF, kernel A is applied horizontally and B^T vertically; for the CVPEF, B is applied horizontally and A^T vertically. Finally, the RVPEF and CVPEF output images are thresholded using the local threshold and combined using an OR operator. This results in an output containing the entire segmented vein pattern.
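A sketch of the two emphasizing filters and their combination is given below. The separable 2D filtering is written as two 1D convolutions, as suggested above; the normalisation by the kernel sum and the use of SciPy are my own choices rather than details from [20], and the threshold argument can be any binarisation function, for example the local-mean threshold sketched in section 5.3.

    import numpy as np
    from scipy import ndimage

    # 1-D masks A (1x11) and B (1x17) from [20]; coefficients are powers of two.
    A = np.array([1, 1, 2, 4, 16, 32, 16, 4, 2, 1, 1], dtype=np.float64)
    B = np.array([1, 1, 1, 1, 1, 2, 8, 16, 32, 16, 8, 2, 1, 1, 1, 1, 1],
                 dtype=np.float64)

    def emphasize(img, horiz, vert):
        # Separable filtering: 1-D convolution along rows, then along columns.
        out = ndimage.convolve1d(img.astype(np.float64), horiz, axis=1)
        out = ndimage.convolve1d(out, vert, axis=0)
        return out / (horiz.sum() * vert.sum())   # scale back to the grey range

    def dbvpe(img, threshold):
        # threshold: any function mapping a grey image to a binary one.
        row_emph = emphasize(img, A, B)   # RVPEF: A horizontally, B^T vertically
        col_emph = emphasize(img, B, A)   # CVPEF: B horizontally, A^T vertically
        return np.logical_or(threshold(row_emph), threshold(col_emph))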

The images in Figure 5.6 show how the vertical and horizontal connectivity can be preserved by using the emphasizing filters.


Figure 5.6: Results from the direction based vein pattern extraction: (a) ROI extracted image, (b) segmented image with normal local threshold, with the partial loss in vertical connectivity highlighted, (c) segmented image with normal local threshold, with the partial loss in horizontal connectivity highlighted, (d) output of the RVPEF, (e) output of the CVPEF, (f) output of the DBVPE algorithm [20].

    5.5. Performance & Choice of Method

This section reviews the performance of the segmentation methods described in the previous sections. The most important properties of a segmentation method are stability and precision. Indeed, it would be impossible to recognize a pattern if small changes in the input resulted in large changes in the segmented pattern. In order to gauge which segmentation method produces the best results, a series of tests has been conducted with all three methods.

Two images of each of two hands have been acquired, using the setup described in section 4.1 on page 15, giving a total of four pictures. These are shown in figure 5.7 through 5.10. Each image is segmented with the three different segmentation methods and the results are compared afterwards.


Figure 5.7: First testing image. Figure 5.8: Second testing image, same hand as in figure 5.7.

Figure 5.9: Third testing image. Figure 5.10: Fourth testing image, same hand as in figure 5.9.

Results of the various segmentation methods are shown in figure 5.11 through 5.22. The best method regarding precision appears to be the direction based extraction, as it produces a fairly stable, matching segmented pattern for all the test images, as can be seen in figure 5.19 through 5.22. Although the images contain some noise, much of it can be removed with the postprocessing methods described in section 5.6. The Laplacian of Gaussian edge detection is in one case incapable of producing an accurate pattern, as can be seen in figure 5.17 and 5.18. Repeated line tracking, on the other hand, has the opposite problem: it produces quite bulky patterns that tend to merge close veins and shadows together. Regarding stability, the repeated line tracking appears slightly more stable than the direction based extraction, as the latter is somewhat vulnerable to noise at the edges of the image. The project group decided that precision was more important than stability, and direction based extraction is therefore used for segmenting vein patterns in this project.


Figure 5.11: Figure 5.7 segmented by repeated line tracking.

Figure 5.12: Figure 5.8 segmented by repeated line tracking.

Figure 5.13: Figure 5.9 segmented by repeated line tracking.

Figure 5.14: Figure 5.10 segmented by repeated line tracking.

Figure 5.15: Figure 5.7 segmented by Laplacian of Gaussian edge detection.

Figure 5.16: Figure 5.8 segmented by Laplacian of Gaussian edge detection.

Figure 5.17: Figure 5.9 segmented by Laplacian of Gaussian edge detection.

Figure 5.18: Figure 5.10 segmented by Laplacian of Gaussian edge detection.


Figure 5.19: Figure 5.7 segmented by direction based extraction.

Figure 5.20: Figure 5.8 segmented by direction based extraction.

Figure 5.21: Figure 5.9 segmented by direction based extraction.

Figure 5.22: Figure 5.10 segmented by direction based extraction.

    5.6. Postprocessing

This section describes the methods used to process the image after segmentation, in order to reduce the effect of undesired elements such as noise. This is done with morphological operators and a filter that removes small pixel blobs from the image.

    5.6.1 Morphological Operators

Morphological methods comprise a range of image processing operations that base their processing on shapes. They apply a structuring element K with a center C to an input image, giving an output image of the same size; usually the input image is binary. To determine the value of every pixel in the output image, a morphological operator compares the value of the corresponding pixel in the input image and its neighbors to the structuring element using a set operator (intersection, union, inclusion, complement).

In the following subsections the basic morphological operators, dilation and erosion, are explained along with the basic operators of morphological noise removal: opening and closing.

    5.6.1.1 Dilation

The purpose of a dilation is to dilate an object; it therefore adds pixels to the boundaries of the objects contained in the image. The number of pixels added depends on the size and shape of the structuring element used to process the image.


The dilation of an object A using a structuring element K with a center C can be written as:

A \oplus K = \{ x : \hat{K}_x \cap A \ne \emptyset \} \qquad (5.16)

where:
x = (x_1, x_2) is a vector used to translate K
\hat{K}_x is the reflection of K around its center C, translated by x

The dilated set is thus formed by all x such that A and the reflected K translated by x have at least one point in common. The rule can therefore be stated as: for each pixel in the output image, its value is 1 if at least one of the pixels covered by the structuring element (the corresponding input pixel and its neighbors) has the value 1. The operation is illustrated in figure 5.23.

Figure 5.23: Illustration of the dilation operator: in dark blue is the output object, in light blue the input object, in green the kernel, and its center is in purple.

    5.6.1.2 Erosion

The purpose of an erosion is to erode an object; it therefore removes pixels from the boundaries of the objects contained in the image. The number of pixels removed depends on the size and shape of the structuring element used to process the image. The erosion of an object A using a structuring element K with a center C can be written as:

A \ominus K = \{ x : K_x \subseteq A \} \qquad (5.17)

where:
x = (x_1, x_2) is a vector used to translate K
K_x is the kernel K translated by x

The erosion of A by K is thus the set of all x such that K translated by x is completely contained in A. The rule can therefore be stated as: for each pixel in the output image, its value is 0 if at least one of the pixels covered by the structuring element (the corresponding input pixel and its neighbors) has the value 0. The operation is illustrated in figure 5.24.


Figure 5.24: Illustration of the erosion operator: in dark blue is the output object, in light blue the input object, in green the kernel, and its center is in purple.

    5.6.1.3 Opening

The basic morphological operators, dilation and erosion, are often combined to implement new image processing techniques, such as opening and closing, which are the basis of morphological noise removal.

The opening operator is an erosion followed by a dilation using the same structuring element. Its purpose is to remove small objects from the image without altering the overall shape and size of the larger objects. It also smooths the contours of the larger objects by removing narrow protrusions and breaking narrow isthmuses. An example of the result obtained after opening is shown in Figure 5.25.

Opening of an object A with a kernel K is written as:

A \circ K = (A \ominus K) \oplus K \qquad (5.18)

Figure 5.25: Example of the opening operator: (a) Input image and (b) Output image after applying the opening operator.

    5.6.1.4 Closing

Closing is the second operator used in morphological noise removal. It is the combination of a dilation followed by an erosion using the same kernel. Its purpose is to remove small holes in the objects; it also smooths their contours by filling gaps and fusing narrow breaks, as can be seen in Figure 5.26.

Closing of an object A with a kernel K is written as:

A \bullet K = (A \oplus K) \ominus K \qquad (5.19)

Figure 5.26: Example of the closing operator: (a) Input image and (b) Output image after applying the closing operator.
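The opening and closing used in the postprocessing can be expressed compactly with a binary morphology library. The sketch below uses scipy.ndimage as an assumed toolset and a 3 x 3 square structuring element as a placeholder; the kernel sizes actually used in the project were found through testing (see appendix A.2).

    import numpy as np
    from scipy import ndimage

    K = np.ones((3, 3), dtype=bool)          # placeholder structuring element

    def morphological_cleanup(binary_img, k=K):
        # Opening (erosion then dilation) removes small noise objects,
        # closing (dilation then erosion) reconnects the remaining pattern.
        opened = ndimage.binary_opening(binary_img, structure=k)
        return ndimage.binary_closing(opened, structure=k)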

    5.6.2 Blob Removal

    Figure 5.27: Example of unwanted blobs appearing in a segmented image

A blob is a cluster of interconnected pixels that have the same color. Because the segmentation process is not always able to distinguish between vein and non-vein regions in an image, white pixels may emerge that do not correspond to an actual vein. To remedy this problem, one can exploit the fact that most of this noise tends to be scattered around the image as small blobs of pixels. Figure 5.27 shows an example of blobs appearing in a segmented image.

Since the vein pattern is interconnected, it should make up the biggest blob or blobs in the segmented image (as seen in Figure 5.27). This means that smaller unwanted blobs can safely be removed if their size is below a given percentage (e.g. 10%) of the biggest blob in the image.


    5.6.2.1 The Algorithm

The algorithm works by iterating through the image, searching for white pixels. When a white pixel is found, the algorithm searches the 4 adjacent pixels for additional white pixels and adds them to a list. Next, the first pixel in the list is used as a new starting point and its 4 neighbors are searched. This is repeated until all white pixels in the blob have been found. Meanwhile, the algorithm keeps track of the size of the current blob. When all blobs in the image have been identified, the ones with a size below a specified fraction of the largest blob are removed. The following definitions are used in the description of the algorithm:

I: the binary input image, defined as the matrix I ∈ {0, 1}^{M×N}, where M and N are the dimensions of the input image.

J: the matrix used to keep track of which pixels have been searched. Its size is identical to that of I: J ∈ {0, 1}^{M×N}. At initialization it is filled with zeros.

BlobList: the list of all blobs found in the image. It contains elements of type Blob.

Blob: the descriptor for a single blob. It has the following attributes:
    Area: the total area of the blob.
    PixelList: the list of coordinates of the pixels within the blob, defined as pixelList ⊆ (x, y)^n.

Current blob: the pointer to the blob currently being processed.

OpenList: the list of coordinates of white pixels in a blob.

S: the set of pixels adjacent to the current pixel, defined for a given P as S = {P + (0, 1); P + (0, −1); P + (1, 0); P + (−1, 0)}.

Argument: the argument that specifies the ratio between the largest blob in the image and the threshold for blob removal.

    The algorithm can be described with the following pseudo code:

    Algorithm 5.6.1

    initialize and zero-fill J
    initialize blobList
    for all 0 ≤ i, j < M, N do
        if I(i, j) = 1 AND J(i, j) = 0 then
            P ← (i, j)
            create new blob and add to blobList
            current blob ← new blob
            add P to openList
            while openList not empty do
                update S for new P
                if I(S_n) = 1 for any 0 ≤ n < 4 then
                    add S_n to openList
                end if
                J(P) ← 1
                add P to current blob.pixelList
                P ← first element of openList
                remove first element from openList
                current blob.area++
            end while
        end if
    end for
    largest area ← max(blobList.area)
    for all elements in blobList do
        if area < (largest area · argument) then
            for all elements in pixelList do
                I(pixelList(n)) ← 0
            end for
        end if
    end for
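The same effect as algorithm 5.6.1 can be obtained with a connected-component labelling routine, which performs the flood fill internally. The sketch below uses scipy.ndimage.label with 4-connectivity, matching the neighbourhood searched by the pseudo code; the library choice is an assumption for illustration, not the project's implementation.

    import numpy as np
    from scipy import ndimage

    def remove_small_blobs(binary_img, argument=0.05):
        # Remove blobs whose area is below `argument` times the largest blob.
        four_connected = np.array([[0, 1, 0],
                                   [1, 1, 1],
                                   [0, 1, 0]], dtype=bool)
        labels, n = ndimage.label(binary_img, structure=four_connected)
        if n == 0:
            return binary_img
        areas = np.bincount(labels.ravel())[1:]          # area per blob, background skipped
        keep = np.flatnonzero(areas >= argument * areas.max()) + 1
        return np.isin(labels, keep).astype(np.uint8)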

    5.7. Summary and Results

As stated previously, the Direction Based Vascular Pattern Extraction method proposed by Im, Choi and Kim [20] was chosen as the basis of the image segmentation. This section details the different steps of the segmentation, which come right after preprocessing and are illustrated in Figure 5.28.

When the segmentation stage is reached, the system has already performed image capture, region of interest extraction and preprocessing. The direction based emphasizing filter therefore receives a preprocessed image I. The image is passed to two linear filters: one with an 11 × 17 kernel that emphasizes the horizontal connectivity and the other with a 17 × 11 kernel that emphasizes the vertical connectivity. The two filters produce two separate output images. A local threshold is then applied to both matrices along with morphological operators: first an opening operator to remove noise and then a closing operator to reconnect the pattern. At the end of the direction based emphasizing filter an OR operator is applied to the two binary matrices, so that a single image that takes both the horizontal and the vertical emphasized connectivity into consideration is sent to postprocessing. The parameters for the local thresholding method have been tuned to yield the best result; the details can be found in appendix A.1 on page 79.

The postprocessing consists of morphological operators, opening and closing, that remove noise and consolidate the pattern. Appropriate kernel sizes for these operators have been found through testing; more details can be found in appendix A.2 on page 80, and some other important implementation details are given in appendix B. In order to remove noise that does not belong to the vein pattern, a blob removal filter is applied; it removes the blobs that are smaller than 5% of the largest blob in the image. The results of postprocessing applied to figure 5.19 through 5.22 can be seen in figure 5.29 through 5.32.
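Putting the pieces together, the segmentation and postprocessing chain summarised above can be sketched as follows, reusing the illustrative helpers defined earlier in this chapter (emphasize, A, B, local_mean_threshold, morphological_cleanup and remove_small_blobs). The 5% blob ratio is the value quoted in the text; the window size and all other parameters are placeholders, with the tuned values given in appendices A.1 and A.2, and the sketch is not the project's actual implementation.

    import numpy as np

    def segment_vein_pattern(preprocessed_img):
        # Direction based emphasizing filters (11x17 and 17x11 separable kernels).
        row_emph = emphasize(preprocessed_img, A, B)   # horizontal connectivity
        col_emph = emphasize(preprocessed_img, B, A)   # vertical connectivity
        # Local threshold and morphological cleanup on each branch, then OR.
        branches = [morphological_cleanup(local_mean_threshold(e, window=25))
                    for e in (row_emph, col_emph)]     # window=25 is a placeholder
        combined = np.logical_or(branches[0], branches[1])
        # Remove blobs smaller than 5% of the largest blob.
        return remove_small_blobs(combined.astype(np.uint8), argument=0.05)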


    Figure 5.28: Flow chart of the image segmentation block


Figure 5.29: To the left is the same figure as in 5.19 and to the right is its postprocessed version.

Figure 5.30: To the left is the same figure as in 5.20 and to the right is its postprocessed version.

Figure 5.31: To the left is the same figure as in 5.21 and to the right is its postprocessed version.

Figure 5.32: To the left is the same figure as in 5.22 and to the right is its postprocessed version.
