"principal component analysis - the original paper" presentation @ papers we love...
TRANSCRIPT
Principal Component Analysis – the original paper
Adrian Florea
Papers We Love – Bucharest Chapter – Meetup #12
December 16th, 2016
Karl Pearson (1857-1936), 1901
K. Pearson, "On lines and planes of closest fit to systems of points in space", The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, Sixth Series, 2, pp. 559-572 (1901)
Dimensionality reduction (Pearson's words, 1901)
Dimensionality reduction (115 years later, 2016)
71,291 points, 200 features, 2 principal components
https://projector.tensorflow.org
Dimensionality reduction

Original data set (an n × d matrix):
• n observations
• d possibly correlated features

transform ↓

New data set (an n × r matrix, r << d):
• n observations
• r uncorrelated features
• retains most of the data variability
Introduction

$x_i \in \mathbb{R}^d$, $i = 1, \dots, n$; $X = (x_1, x_2, \dots, x_n)^T$

In the standard basis, $\mathbb{R}^d = \mathrm{span}(e_1, e_2, \dots, e_d)$ and
$x = x_1 e_1 + x_2 e_2 + \dots + x_d e_d$

Let $U = [u_1\ u_2\ \dots\ u_d]$ with $u_i \in \mathbb{R}^d$, $u_i^T u_j = \delta_{ij}$, $i, j = 1, \dots, d$, so that $\mathbb{R}^d = \mathrm{span}(u_1, u_2, \dots, u_d)$.
Columns of $U$ are an orthonormal basis; $U$ is orthogonal: $U^T U = U U^T = I$.

$x = a_1 u_1 + a_2 u_2 + \dots + a_d u_d = U a \Rightarrow a = U^T x$
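As a quick numeric check of this change of basis, here is a minimal NumPy sketch (the QR-based random orthonormal basis is my illustration, not from the talk):

import numpy as np

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)                  # a point of R^d, coordinates in the standard basis

# build an arbitrary orthonormal basis U of R^d via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
assert np.allclose(U.T @ U, np.eye(d))  # U^T U = I: columns are orthonormal

a = U.T @ x                             # coordinates of the same point in the basis (u_1, ..., u_d)
assert np.allclose(U @ a, x)            # x = a_1 u_1 + ... + a_d u_d = U a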
Goal: find an “optimal” basis
$(x_1, x_2, \dots, x_d)^T$ = the coordinates of $x$ in $\mathrm{span}(e_1, e_2, \dots, e_d)$
$(a_1, a_2, \dots, a_d)^T$ = the coordinates of the same $x$ in $\mathrm{span}(u_1, u_2, \dots, u_d)$
Keeping only $r \ll d$ components preserves most of the "variability":
$x' = a_1 u_1 + a_2 u_2 + \dots + a_r u_r = U_r a_r$, where $a_r = U_r^T x$
$x' = U_r U_r^T x = P_r x$, the orthogonal projection of $x$ onto $\mathrm{span}(u_1, \dots, u_r)$
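The truncated reconstruction can be checked numerically; a minimal sketch, where U_r and P_r mirror the slide's symbols and the data is synthetic:

import numpy as np

rng = np.random.default_rng(0)
d, r = 4, 2
x = rng.normal(size=d)
U, _ = np.linalg.qr(rng.normal(size=(d, d)))

U_r = U[:, :r]                        # keep only the first r basis vectors
a_r = U_r.T @ x                       # reduced coordinates a_r = U_r^T x
x_prime = U_r @ a_r                   # x' = U_r a_r

P_r = U_r @ U_r.T                     # orthogonal projector onto span(u_1, ..., u_r)
assert np.allclose(P_r @ x, x_prime)  # x' = P_r x
assert np.allclose(P_r @ P_r, P_r)    # projectors are idempotent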
r = 1: find a unit vector $u$ ($u^T u = 1$) that maximizes the variance of the points projected on it. Assume zero-mean data, $\bar{x} = 0$.

Projection of $x_i$ on $u$:
$\|x_i'\| = \|x_i\|\cos(x_i, u) = \|x_i\|\,\frac{x_i^T u}{\|x_i\|\,\|u\|} = x_i^T u = a_i$, so $x_i' = (x_i^T u)\,u$

Variance of the projected points:
$\sigma_u^2 = \frac{1}{n}\sum_{i=1}^{n}(u^T x_i - 0)^2 = \frac{1}{n}\sum_{i=1}^{n} u^T x_i\, x_i^T u = u^T\left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i^T\right) u = \frac{1}{n}\, u^T X^T X\, u = \frac{n-1}{n}\, u^T \mathrm{cov}(X)\, u$

Optimization problem (the constant factor does not change the maximizer, so write $\Sigma = \mathrm{cov}(X)$):
$\max_u\, u^T \Sigma u \quad \text{s.t.}\quad u^T u = 1$
First principal component
$L(u, \lambda) = u^T \Sigma u - \lambda\,(u^T u - 1)$
$\frac{\partial L}{\partial u}(u, \lambda) = 0 \Rightarrow 2\,\Sigma u - 2\,\lambda u = 0 \Rightarrow \Sigma u = \lambda u$
so $\lambda$ is an eigenvalue of the covariance matrix $\Sigma$, with associated eigenvector $u$, and
$\max_{u^T u = 1} u^T \Sigma u = \max_{u^T u = 1} \lambda\, u^T u = \lambda_1$
The largest eigenvalue $\lambda_1$ is the projected variance on the associated eigenvector, which is the direction of most variance, called the first principal component.
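This eigenvalue characterization is easy to verify numerically; a minimal sketch on synthetic centered data (the anisotropic scaling is my choice, to make the top direction unambiguous):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])  # synthetic data
X = X - X.mean(axis=0)                                    # center: mean = 0

S = np.cov(X, rowvar=False)            # covariance matrix Sigma
eigvals, eigvecs = np.linalg.eigh(S)   # ascending eigenvalues (S is symmetric)
lam1 = eigvals[-1]                     # largest eigenvalue lambda_1
u1 = eigvecs[:, -1]                    # first principal component

assert np.allclose(S @ u1, lam1 * u1)  # Sigma u_1 = lambda_1 u_1
proj_var = np.var(X @ u1, ddof=1)      # variance of the projections x_i^T u_1
assert np.isclose(proj_var, lam1)      # ... equals lambda_1, the maximum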
The equivalent optimization problem solved by Pearson:
$\min_{u^T u = 1} \mathrm{MSE}(u)$
An equivalent objective function
$\mathrm{MSE}(u) = \frac{1}{n}\sum_{i=1}^{n}\|x_i - x_i'\|^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - x_i')^T(x_i - x_i') = \frac{1}{n}\sum_{i=1}^{n}\left(\|x_i\|^2 - 2\,x_i^T x_i' + \|x_i'\|^2\right)$

But $x_i' = (x_i^T u)\,u$ and $u^T u = 1$, so:

$\mathrm{MSE}(u) = \frac{1}{n}\sum_{i=1}^{n}\left(\|x_i\|^2 - 2\,(x_i^T u)(u^T x_i) + (x_i^T u)\,u^T u\,(x_i^T u)\right) = \frac{1}{n}\sum_{i=1}^{n}\left(\|x_i\|^2 - (u^T x_i)^2\right)$

$= \underbrace{\frac{1}{n}\sum_{i=1}^{n}\|x_i\|^2}_{\text{const}} - \; u^T\left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i^T\right) u$
Maximizing the projected variance is equivalent to minimizing the mean squared error:
$\min_{u} \mathrm{MSE}(u) \Leftrightarrow \max_{u} u^T \Sigma u$
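The equivalence can also be confirmed numerically; a minimal sketch (the helper names mse and proj_var are mine):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X = X - X.mean(axis=0)                   # centered data

def mse(u):
    # mean squared reconstruction error for the line through the origin along u
    X_proj = np.outer(X @ u, u)          # rows x_i' = (x_i^T u) u
    return np.mean(np.sum((X - X_proj) ** 2, axis=1))

def proj_var(u):
    return np.mean((X @ u) ** 2)         # (1/n) sum_i (u^T x_i)^2

u = rng.normal(size=3)
u /= np.linalg.norm(u)                   # enforce u^T u = 1

const = np.mean(np.sum(X ** 2, axis=1))  # (1/n) sum_i ||x_i||^2, independent of u
assert np.isclose(mse(u), const - proj_var(u))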
r = 2: we have found $u_1$, the direction of most variance (the r = 1 case). Now find the next direction $v$:
$\max_v\, v^T \Sigma v \quad \text{s.t.}\quad v^T v = 1,\ v^T u_1 = 0$

$L(v, \lambda, \beta) = v^T \Sigma v - \lambda\,(v^T v - 1) - \beta\, v^T u_1$
$\frac{\partial L}{\partial v} = 2\,\Sigma v - 2\,\lambda v - \beta\, u_1 = 0$
Left-multiplying by $u_1^T$ gives $2\, u_1^T \Sigma v - 2\,\lambda\, u_1^T v - \beta\, u_1^T u_1 = 2\,\lambda_1 u_1^T v - 0 - \beta = -\beta = 0$, so $\beta = 0$ and $\Sigma v = \lambda v$.
So $v$ is an eigenvector of $\Sigma$, and $\max_v v^T \Sigma v = \lambda_2$, the second largest eigenvalue of $\Sigma$.
The second principal component is the eigenvector associated with the second largest eigenvalue, $\lambda_2$.
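This is again easy to verify: for a symmetric covariance matrix the eigenvectors are mutually orthogonal, so the second eigenvector satisfies both constraints. A minimal sketch on synthetic data:

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.5, 0.4])
X = X - X.mean(axis=0)

S = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)     # ascending order
u1, v = eigvecs[:, -1], eigvecs[:, -2]   # first and second principal components
lam2 = eigvals[-2]

assert np.isclose(u1 @ v, 0.0)           # v is orthogonal to u_1, as the constraint requires
assert np.allclose(S @ v, lam2 * v)      # Sigma v = lambda_2 v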
The total variance is invariant to a change of basis!
$\sum_{i=1}^{d} \sigma_i^2 = \mathrm{tr}(\Sigma) = \sum_{i=1}^{d} \lambda_i$
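In code this is just the trace identity; a minimal sketch:

import numpy as np

rng = np.random.default_rng(3)
S = np.cov(rng.normal(size=(100, 5)), rowvar=False)

# sum of per-feature variances = trace(Sigma) = sum of eigenvalues,
# so the total variance does not depend on the chosen orthonormal basis
assert np.isclose(np.trace(S), np.linalg.eigvalsh(S).sum())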
PCA algorithm
1. Normalize the data to zero mean, unit variance.
2. Calculate the covariance matrix $\Sigma$ of the normalized data set.
3. Find all eigenvalues of $\Sigma$ and arrange them in descending order: $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_d \geq 0$.
4. Choose r and keep the top r eigenvalues and their corresponding eigenvectors (principal components): $U_r = [u_1\ u_2\ \dots\ u_r]$ is the reduced basis of principal components, and $a_i = U_r^T x_i$, $1 \leq i \leq n$, is the reduced-dimensionality data set (see the sketch after this list).
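A minimal NumPy sketch of these four steps (the function name pca and the synthetic data are mine, not from the talk):

import numpy as np

def pca(X, r):
    # 1. normalize to zero mean, unit variance
    X_norm = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # 2. covariance matrix of the normalized data set
    S = np.cov(X_norm, rowvar=False)
    # 3. eigenvalues/eigenvectors of S, arranged in descending order
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # 4. keep the top r eigenvectors and project: a_i = U_r^T x_i
    U_r = eigvecs[:, :r]
    return X_norm @ U_r, U_r, eigvals

rng = np.random.default_rng(4)
A, U_r, eigvals = pca(rng.normal(size=(100, 10)), r=2)
print(A.shape)  # (100, 2): n observations, r uncorrelated features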
How to choose r?
• Fraction of total variance retained (see the sketch after this list): $r = \min\left\{ r \,\middle|\, \sum_{i=1}^{r}\lambda_i \Big/ \sum_{i=1}^{d}\lambda_i \geq \alpha \right\}$, with $\alpha \in [0.7, 0.9]$
• Kaiser criterion (Henry Felix Kaiser, 1960): keep the components with $\lambda_i > 1$; a variant keeps the eigenvalues above the average, $r = \max\left\{ i \,\middle|\, \lambda_i > \frac{1}{d}\sum_{j=1}^{d}\lambda_j \right\}$
• Scree plot, e.g. screeplot(modelname) in R
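A minimal sketch of the first two criteria (the helper choose_r and the example eigenvalues are mine):

import numpy as np

def choose_r(eigvals, alpha=0.9):
    # smallest r whose top-r eigenvalues explain at least a fraction alpha
    # of the total variance (alpha typically in [0.7, 0.9])
    explained = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(explained, alpha) + 1)

eigvals = np.array([5.0, 2.0, 0.9, 0.7, 0.4])  # assumed sorted in descending order
print(choose_r(eigvals))                       # -> 4
r_kaiser = int((eigvals > 1.0).sum())          # Kaiser criterion: keep lambda_i > 1
print(r_kaiser)                                # -> 2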
PCA Pros & Cons
PROS
• dimensionality reduction can help learning
• removes noise
• can deal with large data sets
CONS
• hard to interpret
• sample dependent
• linear
Bibliography• A.A. Farag, S. Elhabian, "A Tutorial on Principal Component Analysis" (2009) - http://dai.fmph.uniba.sk/courses/ml/sl/PCA.pdf
• N. de Freitas, "CPSC 340 - Machine Learning and Data Mining Lectures", University of British Columbia (2012) - http://www.cs.ubc.ca/~nando/340-2012/lectures.php
• I. Georgescu, "Inteligență computațională" [Computational Intelligence], Editura ASE (2015)
• I.T. Jolliffe, "Principal Component Analysis", 2nd ed., Springer (2002)
• J.N. Kutz, "Data-Driven Modeling & Scientific Computation. Methods for Complex Systems & Big Data", Oxford University Press (2013)
• J.N. Kutz, "Singular Value Decomposition" - http://courses.washington.edu/am301/page1/page15/video.html
• K. Pearson, "On lines and planes of closest fit to systems of points in space", The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, Sixth Series, 2, pp. 559-572 (1901) - http://stat.smmu.edu.cn/history/pearson1901.pdf
• M.J. Zaki, W. Meira Jr., “Data Mining and Analysis. Fundamental Concepts and Algorithms”, Cambridge University Press (2014)