"principal component analysis - the original paper" presentation @ papers we love...


Page 1: "Principal Component Analysis - the original paper" presentation @ Papers We Love Bucharest

Principal Component Analysis – the original paperADRIAN FLOREAPAPERS WE LOVE – BUCHAREST CHAPTER – MEETUP #12DECEMBER 16 T H , 2016

Page 2

Karl Pearson, 1901

K. Pearson, "On lines and planes of closest fit to systems of points in space", The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, Sixth Series, 2, pp. 559-572 (1901)

(1857-1936)

Page 3

Dimensionality reduction (Pearson’s words)

(1901)

Page 4

Dimensionality reduction (115 years later)

(2016)

71291 points, 200 features, 2 principal components

https://projector.tensorflow.org

Page 5

Dimensionality reduction

Original data set (n × d):
• n observations
• d possibly correlated features

→ transform →

New data set (n × r):
• n observations
• r uncorrelated features, with r << d
• retains most of the data variability

Page 6

Introduction

Data points: $x_i = (x_{i1}, x_{i2}, \dots, x_{id})^T \in \mathbb{R}^d$, $i = 1, \dots, n$; data matrix $X = (x_1 \; x_2 \; \dots \; x_n)^T$.

$\mathbb{R}^d = \mathrm{span}(e_1, e_2, \dots, e_d)$, the standard basis.

Take any orthonormal basis $\{u_j \in \mathbb{R}^d \mid j = 1, \dots, d\}$, i.e. $u_i^T u_j = \delta_{ij}$, s.t. $\mathbb{R}^d = \mathrm{span}(u_1, u_2, \dots, u_d)$.

Every $x$ can be written in this basis: $x = a_1 u_1 + a_2 u_2 + \dots + a_d u_d = U a$, where $U = (u_1 \; u_2 \; \dots \; u_d)$ and $a = (a_1, \dots, a_d)^T$.

Columns of $U$ are an orthonormal basis $\iff$ $U$ is orthogonal: $U^T U = U U^T = I$, hence $a = U^T x$.
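To make the change of basis concrete, here is a minimal NumPy sketch (not from the slides; the variable names simply mirror the symbols above): it builds an orthonormal basis U, computes the coordinates a = Uᵀx, and checks that Ua reconstructs x exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Build an orthonormal basis U (columns u_1 ... u_d) via a QR decomposition
U, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = rng.standard_normal(d)   # a point in R^d
a = U.T @ x                  # coordinates in the new basis: a = U^T x
x_back = U @ a               # change of basis back: x = U a

print(np.allclose(U.T @ U, np.eye(d)))  # True: U is orthogonal
print(np.allclose(x_back, x))           # True: exact reconstruction
```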

Page 7

Goal: find an "optimal" basis

$(x_1, x_2, \dots, x_d)$ in $\mathrm{span}(e_1, e_2, \dots, e_d)$ $\;\xrightarrow{U^T}\;$ $(a_1, a_2, \dots, a_d)$ in $\mathrm{span}(u_1, u_2, \dots, u_d)$

Keep only the first $r \ll d$ coordinates, so that the truncation preserves most of the "variability":

$x' = a_1 u_1 + a_2 u_2 + \dots + a_r u_r = U_r a_r$, with $U_r = (u_1 \; \dots \; u_r)$ and $a_r = U_r^T x$,

hence $x' = U_r U_r^T x = P_r x$, the orthogonal projection of $x$ onto $\mathrm{span}(u_1, \dots, u_r)$.
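A small sketch of the rank-r truncation, under the same assumptions as the previous snippet: it keeps the first r basis vectors, forms P_r = U_r U_rᵀ, and checks that x' = U_r a_r equals P_r x and that P_r is idempotent.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 5, 2

U, _ = np.linalg.qr(rng.standard_normal((d, d)))  # full orthonormal basis
U_r = U[:, :r]                                    # keep the first r basis vectors

P_r = U_r @ U_r.T            # projection matrix onto span(u_1, ..., u_r)
x = rng.standard_normal(d)
a_r = U_r.T @ x              # reduced coordinates: a_r = U_r^T x
x_prime = U_r @ a_r          # truncated reconstruction: x' = U_r a_r

print(np.allclose(x_prime, P_r @ x))  # True: x' = P_r x
print(np.allclose(P_r @ P_r, P_r))    # True: P_r is an orthogonal projection
```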

Page 8

r = 1: find a unit vector $u$ (with $u^T u = 1$) that maximizes the variance of the points projected onto it.

Projection of $x_i$ on $u$: $\|x_i'\| = \|x_i\| \cos(x_i, u) = \|x_i\| \dfrac{x_i^T u}{\|x_i\| \, \|u\|} = x_i^T u$, so $x_i' = (x_i^T u)\, u = a_i u$.

Assume the data is centered, $\bar{x} = 0$. The variance of the projected points is then

$\dfrac{1}{n} \sum_{i=1}^{n} (a_i - 0)^2 = \dfrac{1}{n} \sum_{i=1}^{n} (x_i^T u)^2 = \dfrac{1}{n} \sum_{i=1}^{n} u^T x_i x_i^T u = u^T \Big( \dfrac{1}{n} \sum_{i=1}^{n} x_i x_i^T \Big) u = u^T \Sigma u$

where $\Sigma = \dfrac{1}{n} X^T X$ (proportional to the sample covariance $\mathrm{cov}(X)$, which divides by $n-1$).

The problem: $\max_u u^T \Sigma u$ subject to $u^T u = 1$.

Page 9

First principal component

Lagrangian: $L(u, \lambda) = u^T \Sigma u - \lambda (u^T u - 1)$

$\dfrac{\partial L}{\partial u} = 2 \Sigma u - 2 \lambda u = 0 \;\Rightarrow\; \Sigma u = \lambda u$, and $\dfrac{\partial L}{\partial \lambda} = 0 \;\Rightarrow\; u^T u = 1$

so $\lambda$ is an eigenvalue of the covariance matrix $\Sigma$, with associated eigenvector $u$.

$\max_{u} u^T \Sigma u = \max_{u} \lambda \, u^T u = \lambda_1$

The largest eigenvalue is the projected variance on the associated eigenvector, which is the direction of most variance, called the first principal component.
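As a quick numerical check of this result (an illustrative NumPy sketch, not part of the original slides): on synthetic centered data, the projected variance of the top eigenvector of Σ equals λ₁ and is not exceeded by any random unit direction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 5

# Centered synthetic data with unequal column variances, so there is a clear top direction
X = rng.standard_normal((n, d)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
X = X - X.mean(axis=0)

Sigma = (X.T @ X) / n                      # covariance matrix (1/n convention)
eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
u1 = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

def projected_variance(u):
    return u @ Sigma @ u                   # u^T Sigma u

# Random unit directions never beat the first principal component
random_dirs = rng.standard_normal((1000, d))
random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)
best_random = max(projected_variance(u) for u in random_dirs)

print(np.isclose(projected_variance(u1), eigvals[-1]))        # True: equals lambda_1
print(best_random <= projected_variance(u1) + 1e-12)          # True
```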

Page 10

Page 11

The equivalent optimization problem, solved by Pearson:

$\min_u \mathrm{MSE}(u)$ subject to $u^T u = 1$

Page 12

An equivalent objective function

$\mathrm{MSE}(u) = \dfrac{1}{n} \sum_{i=1}^{n} \|x_i - x_i'\|^2 = \dfrac{1}{n} \sum_{i=1}^{n} (x_i - x_i')^T (x_i - x_i') = \dfrac{1}{n} \sum_{i=1}^{n} \big( \|x_i\|^2 - 2\, x_i^T x_i' + \|x_i'\|^2 \big)$

But $x_i' = (x_i^T u)\, u$, so $x_i^T x_i' = (x_i^T u)^2$ and $\|x_i'\|^2 = (x_i^T u)^2\, u^T u = (x_i^T u)^2$. Therefore

$\mathrm{MSE}(u) = \dfrac{1}{n} \sum_{i=1}^{n} \big( \|x_i\|^2 - 2 (x_i^T u)^2 + (x_i^T u)^2 \big) = \underbrace{\dfrac{1}{n} \sum_{i=1}^{n} \|x_i\|^2}_{\mathrm{const}} - \dfrac{1}{n} \sum_{i=1}^{n} (x_i^T u)^2 = \mathrm{const} - u^T \Sigma u$

$\min_u \mathrm{MSE}(u) = \min_u \big( \mathrm{const} - u^T \Sigma u \big) \iff \max_u u^T \Sigma u$

Maximizing the projected variance is equivalent to minimizing the mean squared error:

$\min_u \mathrm{MSE}(u) \iff \max_u u^T \Sigma u$
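The identity MSE(u) + uᵀΣu = const can be checked directly. A minimal NumPy sketch on synthetic centered data (the helper mse() is illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 500, 4
X = rng.standard_normal((n, d)) * np.array([2.0, 1.0, 0.5, 0.2])
X = X - X.mean(axis=0)
Sigma = (X.T @ X) / n

def mse(u):
    X_proj = np.outer(X @ u, u)            # row i is x_i' = (x_i^T u) u
    return np.mean(np.sum((X - X_proj) ** 2, axis=1))

const = np.mean(np.sum(X ** 2, axis=1))    # (1/n) * sum of ||x_i||^2

for _ in range(5):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    # MSE(u) + u^T Sigma u == const for every unit vector u
    print(np.isclose(mse(u) + u @ Sigma @ u, const))
```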

Page 13

r = 2: we have already found $u_1$, the direction of most variance (the r = 1 case). Now look for a unit vector $v$ ($v^T v = 1$), orthogonal to it ($v^T u_1 = 0$), that maximizes the projected variance $v^T \Sigma v$:

$L(v, \lambda, \mu) = v^T \Sigma v - \lambda (v^T v - 1) - \mu\, v^T u_1$

$\dfrac{\partial L}{\partial v} = 2 \Sigma v - 2 \lambda v - \mu u_1 = 0$

Left-multiplying by $u_1^T$: $2\, u_1^T \Sigma v - 2 \lambda\, u_1^T v - \mu\, u_1^T u_1 = 0$; since $u_1^T \Sigma v = (\Sigma u_1)^T v = \lambda_1 u_1^T v = 0$ and $u_1^T u_1 = 1$, it follows that $\mu = 0$, so $\Sigma v = \lambda v$: $v$ is an eigenvector of $\Sigma$.

$\max_v v^T \Sigma v = \lambda_2$, the second largest eigenvalue of $\Sigma$.

The second principal component is the eigenvector associated with the second largest eigenvalue, $\lambda_2$.

The total variance is invariant to a change of basis: $\sum_{i=1}^{d} \sigma_i^2 = \sum_{i=1}^{d} \lambda_i$
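The invariance is easy to verify numerically: the sum of the per-feature variances (the trace of Σ) equals the sum of the eigenvalues. A short NumPy sketch on synthetic data (illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 1000, 4
X = rng.standard_normal((n, d)) * np.array([2.0, 1.5, 0.7, 0.3])
X = X - X.mean(axis=0)
Sigma = (X.T @ X) / n

eigvals = np.linalg.eigvalsh(Sigma)

total_feature_variance = np.trace(Sigma)   # sum of per-feature variances
total_eigen_variance = eigvals.sum()       # sum of projected variances (eigenvalues)

print(np.isclose(total_feature_variance, total_eigen_variance))  # True
```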

Page 14

PCA algorithm

1. Normalize the data set to zero mean, unit variance.
2. Calculate the covariance matrix $\Sigma$ of the normalized data set.
3. Find all eigenvalues of $\Sigma$ and arrange them in descending order: $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_d \geq 0$.
4. Choose $r$ and keep the top $r$ eigenvalues and their eigenvectors (the principal components):
$U_r = (u_1 \; u_2 \; \dots \; u_r)$, the reduced basis of principal components;
$a_i = U_r^T x_i$, $i = 1, \dots, n$, the reduced-dimensionality data set.
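A minimal NumPy sketch of the four steps above (the function name pca and the variable names are illustrative, not from the slides); np.linalg.eigh is used because the covariance matrix is symmetric:

```python
import numpy as np

def pca(X, r):
    """Reduce an (n, d) data matrix X to r dimensions, following the steps above."""
    # 1. Normalize to zero mean, unit variance
    X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

    # 2. Covariance matrix of the normalized data
    Sigma = np.cov(X_norm, rowvar=False)

    # 3. Eigenvalues/eigenvectors, sorted by descending eigenvalue
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # 4. Keep the top r eigenvectors and project: a_i = U_r^T x_i
    U_r = eigvecs[:, :r]
    A = X_norm @ U_r          # (n, r) reduced-dimensionality data set
    return A, U_r, eigvals

# Usage on random data: 100 observations, 10 features, reduced to 2 components
X = np.random.default_rng(5).standard_normal((100, 10))
A, U_r, eigvals = pca(X, r=2)
print(A.shape)  # (100, 2)
```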

Page 15

How to choose r?

• Explained-variance threshold: $r = \min \Big\{ i \;\Big|\; \dfrac{\sum_{j=1}^{i} \lambda_j}{\sum_{j=1}^{d} \lambda_j} \geq \alpha \Big\}$, with $\alpha \in [0.7, 0.9]$

• Kaiser criterion (Henry Felix Kaiser, 1960): keep the components with eigenvalue greater than 1, i.e. $r = \min \{ i \mid \lambda_{i+1} < 1 \}$

}|min{ 1

dir d

i

screeplot(modelname)
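The first two criteria translate directly into code. A short NumPy sketch with made-up eigenvalues, purely for illustration:

```python
import numpy as np

# Assuming `eigvals` holds the eigenvalues of the covariance/correlation matrix
# in descending order (e.g. from the pca() sketch above); these values are made up.
eigvals = np.array([4.1, 2.3, 1.2, 0.8, 0.4, 0.2])

# Explained-variance threshold, alpha in [0.7, 0.9]
alpha = 0.9
explained = np.cumsum(eigvals) / eigvals.sum()
r_variance = int(np.argmax(explained >= alpha)) + 1

# Kaiser criterion (for a correlation matrix): keep eigenvalues greater than 1
r_kaiser = int((eigvals > 1).sum())

print(r_variance, r_kaiser)  # 4 and 3 for the eigenvalues above
```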

Page 16

PCA Pros & Cons

PROS
• dimensionality reduction can help learning
• removes noise
• can deal with large data sets

CONS
• hard to interpret
• sample dependent
• linear (only captures linear structure)

Page 17

Bibliography

• A.A. Farag, S. Elhabian, "A Tutorial on Principal Component Analysis" (2009) - http://dai.fmph.uniba.sk/courses/ml/sl/PCA.pdf

• N. de Freitas, "CPSC 340 - Machine Learning and Data Mining Lectures", University of British Columbia (2012) - http://www.cs.ubc.ca/~nando/340-2012/lectures.php

• I. Georgescu, "Inteligență computațională" [Computational Intelligence], Editura ASE (2015)

• I.T. Jolliffe, "Principal Component Analysis", 2nd ed., Springer (2002)

• J.N. Kutz, "Data-Driven Modeling & Scientific Computation. Methods for Complex Systems & Big Data", Oxford University Press (2013)

• J.N. Kutz, "Singular Value Decomposition" - http://courses.washington.edu/am301/page1/page15/video.html

• K. Pearson, "On lines and planes of closest fit to systems of points in space", Philos. Mag., Sixth Series, 2, pp. 559-572 (1901) - http://stat.smmu.edu.cn/history/pearson1901.pdf

• M.J. Zaki, W. Meira Jr., “Data Mining and Analysis. Fundamental Concepts and Algorithms”, Cambridge University Press (2014)