
Page 1: PCA Channel

PCA Channel

Student: Fangming JI (u4082259)

Supervisor: Professor Tom Geoden

Page 2: PCA Channel

Organization of the Presentation

PCA and problems
PCA channel idea
Use the channel for automatic classification
Channel
Corrected Channel
Conclusion
Future work

Page 3: PCA Channel

Principal Component Analysis

A statistical tool
Maximizes the scatter of all projected samples in the image space
Tries to capture the most important features and reduce the dimensions at the same time
Each eigenvector is a principal component

Page 4: PCA Channel

Algorithm of PCA

Given a training set of M images of the same size, convert each of them into a one-dimensional vector (I_1, I_2, …, I_M). Then find the average image by calculating the mean of the training set, Ψ = (∑ I_n) / M, n = 1, …, M. Each training image differs from the average by Φ_n = I_n − Ψ. Then the covariance matrix C is found by C = A Aᵀ (up to a constant factor 1/M), where A = [Φ_1, Φ_2, …, Φ_M].

C is a very large matrix (its dimension equals the number of pixels in an image), so it is too big to be used in practice. Fortunately, there are only M − 1 non-zero eigenvalues, and they can be found more efficiently with an M × M computation: we compute the eigenvectors v_i of Aᵀ A instead of computing the eigenvectors u_i of A Aᵀ. We can also note that the M best eigenvalues of Aᵀ A are equal to the M best eigenvalues of A Aᵀ, so the corresponding eigenvectors are obtained as u_i = A v_i. At the end we select a value K and keep only the K eigenvectors with the largest eigenvalues.
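
As a concrete illustration of the above, here is a minimal NumPy sketch of the eigenface computation using the M × M trick. The function names (train_eigenfaces, project) and the NumPy implementation are my own illustration, not the Face Recognition Practitioner software used later in the experiments.

```python
import numpy as np

def train_eigenfaces(images, K):
    """Compute the mean face and K eigenfaces from a list of equally sized
    grayscale images, using the M x M trick described above."""
    M = len(images)
    # Flatten each image into a one-dimensional vector I_n
    X = np.stack([img.astype(np.float64).ravel() for img in images])  # M x N^2

    # Average image Psi and difference vectors Phi_n = I_n - Psi
    psi = X.mean(axis=0)
    A = (X - psi).T                      # N^2 x M, columns are Phi_n

    # Eigen-decompose the small M x M matrix A^T A instead of the huge A A^T
    small = A.T @ A                      # M x M
    eigvals, V = np.linalg.eigh(small)

    # Sort by decreasing eigenvalue and keep the K largest
    order = np.argsort(eigvals)[::-1][:K]
    V = V[:, order]

    # Map back to eigenvectors of A A^T: u_i = A v_i, then normalise columns
    U = A @ V                            # N^2 x K
    U /= np.linalg.norm(U, axis=0)
    return psi, U

def project(image, psi, U):
    """Project an image onto the eigenface space (its PCA coordinates)."""
    return U.T @ (image.astype(np.float64).ravel() - psi)
```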

Page 5: PCA Channel

Eigenfaces

Page 6: PCA Channel

Problems of PCA-based Methods

Avalanche disaster
Up to a certain limit, these methods are robust over a wide range of parameters
Beyond that point, the algorithm breaks down dramatically

Page 7: PCA Channel

Constant Features and Inconstant Features

Holistic features = Local features + inconstant features
Local features (constant features)
Inconstant features (such as view, illumination and expression)
A little change in the inconstant features => a little change in the holistic features
A great change in the inconstant features => maybe a great change in the holistic features

Page 8: PCA Channel

Distribution in the Image Space

Images of the same person may sit in totally different regions of the image space.

The distance between the images may be beyond the range of being correctly recognized.

Page 9: PCA Channel

The PCA Channel

Holistic features = Local features + inconstant features
Positions are decided by both local features and inconstant features
Incremental changes in the inconstant features should produce incrementally changed holistic features, or positions
This incrementally changing position looks like a channel, so we call it the “PCA Channel”

Page 10: PCA Channel

Experiment Preparation And Tools

Collecting images with incremental changes in orientation -- Mingtao’s software

45 images from three identities (15 images for each identity, changed incrementally in orientation)

Dozens of images from another three identities, randomly oriented, with some expression images

Face Recognition Practitioner – software developed by me

Page 11: PCA Channel

Existence of The Channel

Take view for example

Page 12: PCA Channel

Automatic Image Classification

Original PCA method:
1) Given an input image
2) Recognize it
3) Compute the PCA again with the new recognized image
4) Go to step 1)

The PCA channel method:
1) Given an input image
2) Recognize it
3) Put it into the training set
4) Go to step 1)
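
To make the channel-style loop concrete, here is a rough Python sketch that reuses the hypothetical train_eigenfaces and project helpers from the earlier sketch. The distance threshold, the default K, and the nearest-neighbour matching rule are illustrative assumptions, not details taken from the slides.

```python
import numpy as np

def pca_channel_classify(stream, images, labels, K=20, threshold=2500.0):
    """Channel-style loop sketch: recognize each incoming image, add it to the
    training set under the matched label, and recompute the PCA so that images
    further along the channel can be matched in later iterations.
    K and threshold are illustrative placeholders."""
    images, labels = list(images), list(labels)
    results = []
    for img in stream:
        # Recompute PCA over the current (possibly grown) training set
        psi, U = train_eigenfaces(images, K)
        coords = np.stack([project(t, psi, U) for t in images])
        w = project(img, psi, U)

        # Nearest-neighbour match in eigenspace
        dists = np.linalg.norm(coords - w, axis=1)
        best = int(np.argmin(dists))

        if dists[best] < threshold:
            # Recognized: put it into the training set (this grows the channel)
            images.append(img)
            labels.append(labels[best])
            results.append(labels[best])
        else:
            results.append(None)  # not recognized this round
    return results
```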

Page 13: PCA Channel

Performance Comparison

If the training set is carefully selected, the performance of the PCA channel is better than that of the original method.

Problems:
Sensitive to the selection of the training set
Contagious problem

Page 14: PCA Channel

Contagious Problem

Page 15: PCA Channel

The Corrected PCA Channel

Cut off the root of the mismatching
Improve the robustness

Page 16: PCA Channel

Implementation

Set up two thresholds: Low (L) and High (H)

If the distance between the input image and its nearest image in the training set is < L, recognize it. If the distance is > H, set the image aside for future recognition; if L < distance < H, make it a new group.

Calculate the PCA again and cut off the mismatching here

Match again
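
A minimal sketch of the two-threshold rule, assuming the same eigenspace nearest-neighbour distance as in the earlier sketches; the values of L and H are placeholders and would have to be tuned for a real image set.

```python
import numpy as np

def corrected_channel_step(w, known_coords, known_labels, new_groups, L, H):
    """Apply the corrected-channel thresholds to one projected image w.

    known_coords / known_labels describe the current training set in eigenspace,
    new_groups collects images that start their own group, and L < H are the
    low and high distance thresholds (placeholder values, e.g. L=1500, H=4000).
    """
    dists = np.linalg.norm(np.stack(known_coords) - w, axis=1)
    best = int(np.argmin(dists))
    d = dists[best]

    if d < L:                    # close enough: recognize it
        return "recognized", known_labels[best]
    if d > H:                    # too far: set aside for future recognition
        return "deferred", None
    new_groups.append(w)         # in between: make it a new group
    return "new_group", None
```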

Page 17: PCA Channel

Results

The success rate = Match to Original Training Set + Match to New Group

The success rate = 44.15% + 50.65% = 94.80%

The success rate = 44.15% + 51.94% = 96.09%

59.74%

Page 18: PCA Channel

New Groups

Page 19: PCA Channel

Conclusion

If the image database is properly built up and the PCA channel is implemented with caution, we can get very good performance for face recognition.

But from the above experiments we can see that the image database is both the strength and the weakness of the PCA channel.

3D face reconstruction system.

Large computational load, but the method can still be appropriate in situations where the focus is more on accuracy than on response time.

Page 20: PCA Channel

Future Work

Verify our research on a larger data set
Preprocess the images before recognition
Build up a 3D face morphable model system
Research in hybrid methods