Lip Recognition


Page 1: Lip recognition

Lips Recognition

Presented by-

Piyush Mittal (211CS2281)

Information Security, Computer Science and Engineering Department

06/24/12

Based on DTW Algorithm

Page 2: Lip recognition

Overview

The French criminologist Edmond Locard first recommended the use of lip prints for criminal identification in 1932. Lip prints are impressions of human lips left on objects such as drinking glasses, cigarettes, drink containers and aluminium foil. The study of human lips as a means of personal identification was started in the 1970s by two Japanese scientists, Yasuo Tsuchihashi and Kazuo Suzuki. The uniqueness of lip prints makes cheiloscopy especially effective when evidence such as lipstick blot marks, cups, glasses or even envelopes is discovered at the crime scene.


National Institute of Technology, Rourkela

Page 3: Lip recognition

Overview

Similarly to fingerprint patterns, lip prints have the following particular properties: permanence, indestructibility and uniqueness. Lip prints are genotypically determined and are therefore unique and stable throughout the life of a human being. Additionally, lip prints are not only unique to an individual but also offer the potential for recognition of an individual's gender. Lip imprints can be captured with special police materials (paper, a special cream and magnetic powder), and the imprint pictures obtained in this way are then scanned.


Page 4: Lip recognition

FEATURE EXTRACTION


Page 5: Lip recognition

1 Image normalization


Page 6: Lip recognition

1.1 Detection of lip area

This consists of several steps. In the first step, normalization of the image histogram is carried out. Then, pixels whose value is greater than an accepted threshold (180) are converted to white. Next, a median filter with a 7×7 mask is used to blur the image. In the last step, binarization is conducted according to the following formula:

I_BIN(x, y) = 1 − round(0.516 · I(x, y) / I_AVG)

where: I(x, y) – value of the pixel at coordinates (x, y) before binarization, I_AVG – average value of all image pixels before binarization, I_BIN(x, y) – value of the pixel at coordinates (x, y) after binarization.

The value 0.516 in the formula was determined experimentally.
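As an illustration, the normalization steps above can be sketched in Python with NumPy. This is a sketch under assumptions: the function and variable names are mine, the 7×7 median blur is omitted, and the rounded value is clipped to {0, 1}.

```python
import numpy as np

def binarize_lip_print(img):
    """Binarize a grayscale lip-print image (values 0-255).

    Follows the formula above, I_BIN = 1 - round(0.516 * I / I_AVG),
    after whitening pixels brighter than the threshold 180.
    """
    img = img.astype(float)          # work on a float copy
    img[img > 180] = 255             # threshold step: bright pixels -> white
    # (a 7x7 median blur would be applied here in the full pipeline)
    i_avg = img.mean()               # average of all pixels before binarization
    out = 1 - np.round(0.516 * img / i_avg)
    return np.clip(out, 0, 1)        # dark pixels -> 1, bright pixels -> 0
```

Note that the formula maps dark (line) pixels to 1 and bright (background) pixels to 0, so the lip pattern ends up as the foreground.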


Page 7: Lip recognition


Page 8: Lip recognition

1.2 Separation of Upper and Lower Lip

Separation is determined by a curve that runs through the centre of the space between the lips. The designated curve divides the lip print into an upper and a lower lip.


Page 9: Lip recognition

1.3 Lip Print Rotation

The curve obtained in the previous stage is then approximated by a straight line (Fig. 3a). From the straight-line equation, a rotation angle towards the X axis can be determined. This yields a separation line parallel to the Cartesian OX axis. The rotated lip print image is shown in Fig. 3b.


Page 10: Lip recognition

Based on the data obtained in steps 1.1–1.3, we get a lip print image that is rotated and divided into the upper and lower lip (Fig. 4).


Page 11: Lip recognition

2 Lip pattern extraction


Page 12: Lip recognition

2.1 Lip pattern smoothing

This process aims to improve the quality of the lines forming the lip pattern. The 5×5 smoothing masks are depicted in Fig. 5.


Page 13: Lip recognition

The procedure is repeated for all of the masks depicted in Fig. 5, and the mask with the largest cumulative sum is ultimately selected. For the mask selected in the previous step, the average value of the pixels lying on the elements of the mask is calculated and copied to the central point of the analyzed source image. The effect of smoothing inside the region of interest is shown in Fig. 6.


Page 14: Lip recognition

2.2 Top-hat transformation

The purpose of this procedure is to emphasize the lines of the lip pattern and separate them from the background. To increase the effectiveness of the algorithm, the transformation is applied twice with different mask sizes: 2×2 to highlight thin lines (up to 3 pixels wide) and 6×6 to highlight thick lines (more than 3 pixels wide). The results of the top-hat transformation are depicted in Fig. 7.
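A white top-hat transform of this kind can be sketched with SciPy's grayscale morphology routines. This is only a sketch: the function names are mine, and square structuring elements are an assumption (the original masks may differ).

```python
import numpy as np
from scipy.ndimage import grey_opening

def top_hat(img, size):
    # White top-hat: the image minus its morphological opening.
    # Bright ridge lines narrower than the structuring element survive;
    # the smoothly varying background is suppressed.
    img = np.asarray(img, dtype=float)
    return img - grey_opening(img, size=(size, size))

def emphasize_lines(img):
    # Applied twice, as described above: a small mask for thin lines,
    # a larger mask for thick lines.
    return top_hat(img, 2), top_hat(img, 6)
```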


Page 15: Lip recognition

2.3 Binarization

This procedure is applied, according to the formula below, to both images resulting from the top-hat transformation. For the thin lines the binarization threshold was set to t = 15, while for the thick lines it was set to t = 100.

I_BIN(x, y) = 1  for I(x, y) > t
I_BIN(x, y) = 0  for I(x, y) ≤ t

where: I(x, y) – value of the pixel at coordinates (x, y) before binarization, t – binarization threshold, I_BIN(x, y) – value of the pixel at coordinates (x, y) after binarization.
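The piecewise formula is a plain global threshold; a minimal sketch (the function name is mine):

```python
import numpy as np

def threshold_binarize(img, t):
    # 1 where the pixel value exceeds the threshold t, 0 otherwise.
    return (np.asarray(img) > t).astype(np.uint8)

# Per the text: t = 15 for the thin-line image, t = 100 for the thick-line one.
```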


Page 16: Lip recognition

The effect of the lip print image binarization is shown in Fig. 8.


Page 17: Lip recognition

In the last stage, the sub-images for the thin and thick lines are combined into a single image, and the obtained global image is denoised. For the noise reduction, appropriate 7×7 masks have been designed; they are depicted in Fig. 9.


Page 18: Lip recognition

For each mask, the number of black pixels in the highlighted area of the mask is counted. If the number of black pixels is less than 5, the central pixel of the mask is converted to white.

Additionally, an 11×11-pixel area around the central point of the mask is searched. If there are fewer than 11 black pixels inside this area, the value of the central point of the mask is also converted to white. An example of the noise reduction is shown in Fig. 10.
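The two-stage rule above can be sketched as follows. The exact 7×7 mask shapes from Fig. 9 are not reproduced here; as an assumption, full 7×7 and 11×11 windows are used instead, and the function name is mine.

```python
import numpy as np

def denoise(binary, min_mask=5, min_area=11):
    # binary: 2-D array, 1 = black (line) pixel, 0 = white background.
    out = binary.copy()
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 1:
                continue
            # Count black pixels in the 7x7 and 11x11 windows around (y, x).
            win7 = binary[max(0, y - 3):y + 4, max(0, x - 3):x + 4]
            win11 = binary[max(0, y - 5):y + 6, max(0, x - 5):x + 6]
            # Convert the central pixel to white if too few black
            # neighbours are found in either window.
            if win7.sum() < min_mask or win11.sum() < min_area:
                out[y, x] = 0
    return out
```

The effect is that isolated specks disappear while connected line segments survive both tests.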


Page 19: Lip recognition


Page 20: Lip recognition

3 Feature extraction

The feature extraction algorithm is carried out for both the upper and lower lip. This process relies on the determination of the vertical, horizontal and diagonal projections of the lip pattern image. Exemplary projections of the lip print pixels onto the appropriate axes are presented in Fig. 11.

Projections are one-dimensional vectors represented in the form of specialized histograms. Each projection counts the black pixels lying along the appropriate direction: horizontal, vertical, and oblique at 45° and 135° angles.
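The four projections can be computed directly with NumPy; a sketch (the function name is mine, and 1 marks a black pixel):

```python
import numpy as np

def projections(binary):
    # binary: 2-D array, 1 = black pixel. Returns four 1-D histograms.
    h = binary.sum(axis=1)               # horizontal: black pixels per row
    v = binary.sum(axis=0)               # vertical: black pixels per column
    rows, cols = np.indices(binary.shape)
    n_r, n_c = binary.shape
    # 45-degree diagonals: pixels with constant row + col
    d45 = np.array([binary[rows + cols == k].sum()
                    for k in range(n_r + n_c - 1)])
    # 135-degree diagonals: pixels with constant row - col
    d135 = np.array([binary[rows - cols == k].sum()
                     for k in range(-(n_c - 1), n_r)])
    return h, v, d45, d135
```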


Page 21: Lip recognition


Page 22: Lip recognition

THE DTW METHOD

Given two sequences Q = {q1, …, qn} and U = {u1, …, um} to be compared, a matrix D of size n×m is built in the first stage. It allows the two sequences Q and U to be aligned. The matrix element D(i, j) contains the distance between the points qi and uj, so D(i, j) = d(qi, uj). In this study, the Euclidean distance was applied.

On the basis of the elements D(i, j), a so-called sequence matching cost has to be determined. The lower the matching cost, the more similar the two sequences Q and U are.
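For the one-dimensional projection vectors used here, the distance matrix can be sketched in a single broadcast (the function name is mine; for scalar elements the Euclidean distance reduces to an absolute difference):

```python
import numpy as np

def distance_matrix(Q, U):
    # D(i, j) = Euclidean distance between q_i and u_j.
    Q = np.asarray(Q, dtype=float)
    U = np.asarray(U, dtype=float)
    return np.abs(Q[:, None] - U[None, :])  # n x m matrix via broadcasting
```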


Page 23: Lip recognition

In the next stage, the warping path W is determined. The path W consists of a set of elements of the matrix D, which defines a mapping between the sequences Q and U.

The warping path can be written as follows:

W = w1, w2, …, wl,   max(n, m) ≤ l ≤ n + m − 1

The element wh of the path W is defined as:

wh = D(i, j),   h = 1, …, l,   i ∈ {1, …, n},   j ∈ {1, …, m}


Page 24: Lip recognition


A correctly determined path W has to fulfil a few conditions:

• Boundary: the first element of the sequence Q must be matched to the first element of the sequence U, w1 = D(1, 1), and the last element of Q must be matched to the last element of U, wl = D(n, m).

• Continuity: consecutive assignments in the path cannot concern elements of the sequences that are more than one step apart: it − it−1 ≤ 1 and jt − jt−1 ≤ 1.

• Monotonicity: the points of the warping path W must be arranged monotonically in time: it − it−1 ≥ 0 and jt − jt−1 ≥ 0.
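The conditions above are those of classical DTW, so the minimal matching cost can be sketched with the standard dynamic program (the function name is mine; scalar sequence elements are assumed, so the Euclidean distance reduces to an absolute difference):

```python
import numpy as np

def dtw_cost(Q, U):
    # Minimal cumulative matching cost of aligning Q and U under the
    # boundary, continuity and monotonicity conditions listed above.
    Q = np.asarray(Q, dtype=float)
    U = np.asarray(U, dtype=float)
    n, m = len(Q), len(U)
    D = np.abs(Q[:, None] - U[None, :])   # local distances D(i, j)
    G = np.full((n, m), np.inf)           # cumulative cost matrix
    G[0, 0] = D[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(G[i - 1, j - 1] if i and j else np.inf,
                       G[i - 1, j] if i else np.inf,
                       G[i, j - 1] if j else np.inf)
            G[i, j] = D[i, j] + prev
    return G[n - 1, m - 1]
```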


Page 25: Lip recognition

The D matrix together with the warping path for two sample sequences is shown in Fig. 12.


Page 26: Lip recognition

The elements wh of the path W can be found very efficiently using dynamic programming. Determination of the path W starts from the upper-right corner of the populated matrix D: in the first step i = n and j = m, so wl = D(n, m). Each subsequent cell of the matrix is then chosen as the predecessor with the lowest cost among the three allowed neighbours, i.e. the minimum of D(i−1, j−1), D(i−1, j) and D(i, j−1).
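That backtracking step can be sketched as follows, assuming G is a cumulative cost matrix as built by the dynamic program (the function name is mine):

```python
import numpy as np

def warping_path(G):
    # Backtrack from the upper-right cell (n-1, m-1) to (0, 0); at each
    # step move to the cheapest of the three allowed predecessors.
    i, j = G.shape[0] - 1, G.shape[1] - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i and j:
            candidates.append((G[i - 1, j - 1], (i - 1, j - 1)))
        if i:
            candidates.append((G[i - 1, j], (i - 1, j)))
        if j:
            candidates.append((G[i, j - 1], (i, j - 1)))
        _, (i, j) = min(candidates)   # pick the cheapest predecessor
        path.append((i, j))
    return path[::-1]                 # return the path from (0, 0) onward
```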


Page 27: Lip recognition

Now, on the basis of all the elements w1, w2, …, wl of the path W, the total (cumulative) matching cost γ can be calculated by accumulating the costs wh along the path.


Page 28: Lip recognition

Comparison of the lip print projections was done using the following algorithm:

1. Matching of the horizontal, vertical and oblique (45° and 135°) projections from the tested and template lip prints using the DTW algorithm (separately for the upper and lower lip).

2. Computation of the matching cost of all corresponding projections by means of the formula above, and averaging of the result.
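Step 2 amounts to averaging the per-projection DTW costs; a sketch (the names and the pluggable cost function are mine):

```python
def lip_match_cost(test_projs, template_projs, dtw_cost):
    # test_projs / template_projs: corresponding projection vectors
    # (horizontal, vertical, 45 deg, 135 deg; upper and lower lip).
    costs = [dtw_cost(a, b) for a, b in zip(test_projs, template_projs)]
    return sum(costs) / len(costs)    # average matching cost
```

A lower average cost indicates that the tested print is closer to the template.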


Page 29: Lip recognition

DTW paths for projections of two different sample lip prints are shown in Fig. 13.


Page 30: Lip recognition

CONCLUSIONS AND FUTURE WORK

The results obtained by the proposed method are good and indicate the possibility of using this approach in forensic identification systems.

In future studies, further improvement of lip print image quality will be performed. It is also planned to compare a larger number of projections generated for different angles. Additionally, studies are planned in which only part of the lip print will be analyzed.


Page 31: Lip recognition

REFERENCES

• L. Smacki, K. Wrobel, P. Porwik, "Lip Print Recognition Based on DTW Algorithm," Department of Computer Systems, University of Silesia, Katowice, Poland, 2011.

• E.J. Keogh and M.J. Pazzani, "Derivative Dynamic Time Warping," Proc. First SIAM International Conference on Data Mining, Chicago, USA, 2001, pp. 1-11.


Page 32: Lip recognition

Any Suggestions?


Page 33: Lip recognition


For more information visit- www.piyushmittal.in
