
Automatic Face Recognition Using Color Based Segmentation and Intelligent Energy Detection

Michael Padilla and Zihong Fan
Group 16, EE368, Spring 2002-2003

Project Objective

Given a digital image of attractive and intelligent EE368 students and teaching staff, detect the presence of faces in the image and output their location and (if possible) gender.

Basic System Summary

Input Image → Color-space Based Segmentation → Morphological Image Processing → Matched Filtering → Peak/Face Detector → Face Estimates

• Initial Design: A reduced eigenface-based coordinate system defining a “face space”, with each possible face a point in that space. Using training images, find the coordinates of faces/non-faces and train a neural-net classifier. Abandoned due to problems with the neural network: lack of transparency and poor generalization.

• Final System: Our secondary design strategy, which replaced the eigenface approach and is described in the following slides.

H vs. S vs. V (Face vs. Non-Face)

For faces, the Hue value is seen to typically occupy values in the range H < 19 or H > 240.

We use this fact to remove some of the non-face pixels in the image.

Y vs. Cr vs. Cb

In the same manner, we found empirically that in the YCbCr space the face pixels occupied the range 102 < Cb < 128 and 125 < Cr < 160.

Any other pixels were assumed non-face and removed.

R vs. G vs. B

Finally, we found some useful trends in the RGB space as well. The following rules were used to further isolate face candidates:

0.836·G - 14 < B < 0.836·G + 44
0.89·G - 67 < B < 0.89·G + 42
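As a rough illustration, the three color-space rules above can be combined into a single skin mask. The sketch below is not the original Matlab code; NumPy and scikit-image are assumed, and the Hue thresholds are assumed to be expressed on a 0-255 scale.

    import numpy as np
    from skimage import color

    def skin_mask(rgb):
        """Combine the HSV, YCbCr, and RGB rules into one boolean skin mask."""
        rgbf = rgb.astype(np.float64) / 255.0         # rgb: HxWx3, uint8

        hue = color.rgb2hsv(rgbf)[..., 0] * 255.0     # Hue rescaled to 0-255 (assumed scale)
        hue_rule = (hue < 19) | (hue > 240)

        ycbcr = color.rgb2ycbcr(rgbf)                 # Cb/Cr on standard digital levels
        cb, cr = ycbcr[..., 1], ycbcr[..., 2]
        ycbcr_rule = (cb > 102) & (cb < 128) & (cr > 125) & (cr < 160)

        g = rgb[..., 1].astype(np.float64)
        b = rgb[..., 2].astype(np.float64)
        rgb_rule = ((b > 0.836 * g - 14) & (b < 0.836 * g + 44) &
                    (b > 0.89 * g - 67) & (b < 0.89 * g + 42))

        return hue_rule & ycbcr_rule & rgb_rule       # True where a pixel may belong to a face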

Removal of Lower Region – Attempt to Avoid Possible False Detections

Just as we used information regarding face color, orientation, and scale from the training images, we also allowed ourselves to assume that faces were unlikely to appear in the lower portion of the visual field. We removed that region to help reduce the possibility of false detections.
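A minimal sketch of this step, assuming the mask is a boolean array and that roughly the bottom fifth of the frame is excluded (the exact fraction is not stated on the slide and is an assumption here):

    import numpy as np

    def remove_lower_region(mask, keep_fraction=0.8):
        out = mask.copy()
        cutoff = int(out.shape[0] * keep_fraction)   # rows below this line are dropped (fraction assumed)
        out[cutoff:, :] = False
        return out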

Morphological Processing Step 1: Open Operation

After removing pixels based on color-space considerations, we initially removed specks by using the open operation with a window of size 3x3.
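A one-line sketch of the opening step, using scipy.ndimage with the 3x3 window mentioned above (the library choice is an assumption; the project itself used Matlab):

    import numpy as np
    from scipy import ndimage

    def open_mask(mask):
        # Erosion followed by dilation with a 3x3 structuring element removes small specks.
        return ndimage.binary_opening(mask, structure=np.ones((3, 3), bool))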

Morphological Processing Step 2: Small “Blob” Removal

Model the average size of the head blobs in the training/reference image. Remove blobs more than one standard deviation below the average size.

In addition, we:
• Convert to grayscale; in our case, there is no more color information to extract.
• Apply mean removal and histogram equalization to flatten the image and bring out details.
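A sketch of the blob filtering and grayscale pre-processing, assuming SciPy/scikit-image; here the blob-size statistics are estimated from the current mask rather than from a separate training/reference image as on the slide:

    import numpy as np
    from scipy import ndimage
    from skimage import color, exposure

    def remove_small_blobs(mask):
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))   # pixels per blob
        keep = areas >= (areas.mean() - areas.std())                   # drop unusually small blobs
        return np.isin(labels, np.arange(1, n + 1)[keep])

    def preprocess_grayscale(rgb, mask):
        gray = color.rgb2gray(rgb) * mask               # grayscale, non-face pixels zeroed
        gray = gray - gray[mask].mean()                 # mean removal
        return exposure.equalize_hist(gray, mask=mask)  # histogram equalization over the mask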

Template Design
• Manually selected a number of quality faces: centered, straight, well lit, and diverse.
• Measured face dimensions and used Matlab to uniformly scale and align them.
• These efforts resulted in 26 sample faces that were added together to produce the final template.

(Figure: original sample faces and the final face template)
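A sketch of how such a template could be built: uniformly rescale the selected face crops to a common size and average them. The crop size and the zero-mean normalization below are assumptions, and scikit-image stands in for the original Matlab workflow:

    import numpy as np
    from skimage import transform

    def build_template(face_crops, size=(64, 48)):
        # face_crops: list of grayscale face images, roughly centered and upright (size assumed)
        aligned = [transform.resize(f, size, anti_aliasing=True) for f in face_crops]
        template = np.mean(aligned, axis=0)      # average of the ~26 sample faces
        return template - template.mean()        # zero-mean, a common choice for matched filtering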

Matched Filter Operation

Masked Input Image → Pre-processing → Apply Matched Filter → Compare peaks to threshold T(n) → If peak > T(n), declare face → Face Coordinates

The matched filter is applied repeatedly while the template is scaled and rotated.

When faces are detected, we remove the corresponding portion of the masked input image to try to avoid multiple and false detections.

For each scale and rotation, the threshold T(n) decreases.

The algorithm is sensitive to errors made in the pre-processing stage.

while (remaining mask area to analyze)
    for s = 1:S                           % scale
        for r = 1:R                       % rotation
            for thrshld = Max:Min         % decreasing threshold
                template = temp(mother_temp, s, r);
                peaks = conv(mask_image, template);
                face = detector(peaks, thrshld);
                if (face)
                    adjust mask_image;
                    adjust remaining mask area;
                end
            end
        end
    end
end
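A runnable Python approximation of this loop is sketched below. The scale and rotation grids, the decreasing threshold schedule, the FFT-based correlation, and the rectangular blanking of detected regions are illustrative assumptions, not values from the original code:

    import numpy as np
    from scipy import ndimage
    from scipy.signal import fftconvolve

    def search_faces(masked_image, mother_template,
                     scales=(1.0, 0.8, 0.6),
                     rotations=(0, -10, 10),
                     thresholds=(0.9, 0.8, 0.7, 0.6)):
        faces = []
        work = masked_image.astype(np.float64).copy()
        for s in scales:
            for r in rotations:
                tmpl = ndimage.rotate(ndimage.zoom(mother_template, s), r, reshape=True)
                tmpl = tmpl - tmpl.mean()
                for t in thresholds:                             # threshold decreases each pass
                    peaks = fftconvolve(work, tmpl[::-1, ::-1], mode='same')  # correlation
                    peaks /= np.abs(peaks).max() + 1e-12         # normalize so t lies in (0, 1]
                    for y, x in zip(*np.where(peaks > t)):
                        if work[y, x] == 0:                      # region already blanked
                            continue
                        faces.append((y, x, s, r))
                        h, w = tmpl.shape
                        work[max(0, y - h // 2): y + h // 2,     # progressive masking:
                             max(0, x - w // 2): x + w // 2] = 0 # remove the detected region
        return faces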

Matched Filtering - Steps

Masked Input Image → Pre-processing → Apply Matched Filter → Compare peaks to threshold T(n) → If peak > T(n), declare face → Face Coordinates

Face Detection Steps and Progressive Masking

After detecting peaks at the output of the matched filter, the following steps are taken:
• Peaks within the threshold range are declared faces.
• Face pixels are convolved with an oval face mask of the appropriate scale, which removes the neighborhood of each detected face pixel.
• After all processing, face pixels are consolidated into blobs by dilation.
• Finally, the centroids of the blobs are deemed the face centers.
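A short sketch of the final consolidation step, assuming scipy.ndimage and a square structuring element whose radius is an illustrative choice:

    import numpy as np
    from scipy import ndimage

    def face_centers(face_pixel_mask, dilation_radius=5):
        structure = np.ones((2 * dilation_radius + 1,) * 2, bool)         # square window (radius assumed)
        blobs = ndimage.binary_dilation(face_pixel_mask, structure=structure)
        labels, n = ndimage.label(blobs)                                   # consolidate pixels into blobs
        return ndimage.center_of_mass(blobs, labels, np.arange(1, n + 1))  # blob centroids = face centers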

General Results

• For the training images, run time is 80–110 seconds.
• Detection results range from 83% to 100%.
• Main strengths: intuitive and (thus far) accurate.
• Main weaknesses: sensitive to errors in pre-processing.

(Example result for Training_7.jpg)

Conclusions
• In most cases, effective use of the relationship between color space and face color, together with morphological processing, allowed effective pre-processing.
• For the images trained on, the system was able to detect faces with reasonable accuracy, miss rate, and false-alarm rate.
• Adaptive adjustment of the template scale, angle, and threshold allowed most faces to be detected.
• Decision-feedback masking reduced the multiple- and false-detection rate.

With additional time, we would have liked to:
• Pursue the eigenimage approach further with MRC or SVM.
• Explore the use of wavelet spaces for face/gender detection.

References

• Bernd Girod, EE368 Class Lecture Notes, Spring 2002-2003.
• R. Gonzalez and R. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, 2002.
• C. Garcia et al., “Face Detection in Color Images Using Wavelet Packet Analysis”.
• M. Elad et al., “Rejection Based Classifier for Face Detection”, Pattern Recognition Letters, Vol. 23, 2002.

Results by Image

Image   Gender Recognition   Face Detection
16      5 (0)                2 (21)
15      5 (0)                5 (19)
14      5 (0)                5 (19)
13      5 (0)                5 (19)
12      5 (0)                5 (19)
11      5 (0)                16 (08)
10      2 (1)                3 (20)
9       2 (1)                1 (22)
8       1 (2)                14 (14)
7       5 (0)                13 (16)
6       5 (0)                12 (17)
5       5 (0)                15 (13)
4       5 (0)                10 (18)
3       5 (0)                10 (18)
2       5 (0)                3 (20)
1       2 (1)                5 (19)
