
Page 1:

Machine learning continued

Image source: https://www.coursera.org/course/ml

Page 2:

More about linear classifiers
• When the data is linearly separable, there may be more than one separator (hyperplane)

xi positive: w·xi + b ≥ 0
xi negative: w·xi + b < 0

Which separator is best?

Page 3:

Support vector machines
• Find the hyperplane that maximizes the margin between the positive and negative examples

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Page 4:

Support vector machines
• Find the hyperplane that maximizes the margin between the positive and negative examples

xi positive (yi = 1): w·xi + b ≥ 1
xi negative (yi = −1): w·xi + b ≤ −1

[Figure labels: Margin, Support vectors]

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Distance between point and hyperplane: |w·x + b| / ||w||

For support vectors, w·x + b = ±1

Therefore, the margin is 2 / ||w||

Page 5:

Finding the maximum margin hyperplane

1. Maximize margin 2 / ||w||

2. Correctly classify all training data:
xi positive (yi = 1): w·xi + b ≥ 1
xi negative (yi = −1): w·xi + b ≤ −1

Quadratic optimization problem:
min over w, b of (1/2)||w||²  subject to  yi(w·xi + b) ≥ 1 for all i

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
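
In practice the quadratic program above is handed to an off-the-shelf solver. A minimal illustrative sketch (assuming scikit-learn and NumPy, with a toy dataset and a large C to approximate the hard-margin case):

import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data: three positives, three negatives
X = np.array([[1.0, 1.0], [2.0, 2.5], [0.0, 2.0],
              [-1.0, -1.0], [-2.0, -0.5], [0.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])

# A very large C makes the soft-margin solver behave like the hard-margin QP
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                               # learned weight vector w
b = clf.intercept_[0]                          # learned bias b
print("margin =", 2.0 / np.linalg.norm(w))     # margin = 2 / ||w||
print("support vectors:\n", clf.support_vectors_)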

Page 6:

Finding the maximum margin hyperplane
• Solution:

w = Σi αi yi xi     (xi: support vector, αi: learned weight)

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Page 7:

Finding the maximum margin hyperplane
• Solution:
w = Σi αi yi xi,   b = yi − w·xi for any support vector

• Classification function (decision boundary):
w·x + b = Σi αi yi xi·x + b

• Notice that it relies on an inner product between the test point x and the support vectors xi

• Solving the optimization problem also involves computing the inner products xi · xj between all pairs of training points

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
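
The dual form of the solution can be checked numerically. A small sketch (again assuming scikit-learn and NumPy; dual_coef_ stores the products αi yi for the support vectors):

import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 2.5], [0.0, 2.0],
              [-1.0, -1.0], [-2.0, -0.5], [0.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# w = sum_i alpha_i y_i x_i, rebuilt from the support vectors alone
w = clf.dual_coef_[0] @ clf.support_vectors_
x_test = np.array([0.5, 0.3])
score = w @ x_test + clf.intercept_[0]          # sum_i alpha_i y_i (x_i . x) + b
print(score, clf.decision_function(x_test.reshape(1, -1))[0])   # the two values agree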

Page 8:

Nonlinear SVMs
• Datasets that are linearly separable work out great:
• But what if the dataset is just too hard?
• We can map it to a higher-dimensional space:

[Figure: 1-D data on the x axis that is not linearly separable becomes separable after mapping each point x to (x, x²)]

Slide credit: Andrew Moore

Page 9:

Nonlinear SVMs
• General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable

Φ: x → φ(x)

Slide credit: Andrew Moore

Page 10:

Nonlinear SVMs
• The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that

K(x , y) = φ(x) · φ(y)

(to be valid, the kernel function must satisfy Mercer’s condition)

• This gives a nonlinear decision boundary in the original feature space:

Σi αi yi φ(xi) · φ(x) + b = Σi αi yi K(xi, x) + b

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
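
Concretely, a solver only ever needs the values K(xi, xj), never φ itself. An illustrative sketch (assuming scikit-learn, which accepts a user-defined kernel as a callable returning the kernel matrix; the toy data and the degree-2 polynomial kernel are just an example):

import numpy as np
from sklearn.svm import SVC

def my_kernel(A, B):
    # K(x, y) = (1 + x . y)^2, a degree-2 polynomial kernel
    return (1.0 + A @ B.T) ** 2

X = np.array([[-2.0], [-1.0], [0.0], [1.0], [2.0]])
y = np.array([1, -1, -1, -1, 1])     # not linearly separable in 1-D

clf = SVC(kernel=my_kernel, C=10.0).fit(X, y)
print(clf.predict(X))                # separable after the implicit lifting; recovers y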

Page 11:

Nonlinear kernel: Example

• Consider the mapping φ(x) = (x, x²)

φ(x) · φ(y) = (x, x²) · (y, y²) = xy + x²y²

K(x, y) = xy + x²y²
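
A quick numeric check of this identity (illustrative NumPy snippet; the particular values of x and y are arbitrary):

import numpy as np

def phi(x):
    return np.array([x, x ** 2])        # explicit lifting phi(x) = (x, x^2)

def K(x, y):
    return x * y + x ** 2 * y ** 2      # kernel K(x, y) = xy + x^2 y^2

x, y = 1.7, -0.4
print(phi(x) @ phi(y), K(x, y))         # identical: the kernel is the inner product in the lifted space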

Page 12:

Polynomial kernel: K(x, y) = (c + x · y)^d

Page 13:

Gaussian kernel

• Also known as the radial basis function (RBF) kernel:

• The corresponding mapping φ(x) is infinite-dimensional!

• What is the role of parameter σ?
• What if σ is close to zero?
• What if σ is very large?

K(x, y) = exp(−(1/σ²) ||x − y||²)
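
To see the role of σ numerically, an illustrative sketch (NumPy assumed; note that scikit-learn writes this kernel as exp(−γ||x − y||²) with γ = 1/σ²):

import numpy as np

def gaussian_kernel(x, y, sigma):
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

x, y = np.array([0.0, 0.0]), np.array([1.0, 1.0])
for sigma in (0.1, 1.0, 100.0):
    print(sigma, gaussian_kernel(x, y, sigma))
# sigma close to zero: K is ~0 for any two distinct points, so every training point
#   acts as its own island and the SVM tends to memorize (overfit) the data
# sigma very large: K is ~1 everywhere, the kernel is nearly constant and the
#   decision boundary becomes very smooth (close to linear)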

Page 14:

Gaussian kernel

[Figure: decision boundary of an SVM with a Gaussian kernel; the marked points are the support vectors (SVs)]

Page 15:

What about multi-class SVMs?
• Unfortunately, there is no “definitive” multi-class SVM formulation
• In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs
• One vs. others
  • Training: learn an SVM for each class vs. the others
  • Testing: apply each SVM to the test example and assign it to the class of the SVM that returns the highest decision value
• One vs. one
  • Training: learn an SVM for each pair of classes
  • Testing: each learned SVM “votes” for a class to assign to the test example
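
Both strategies are available as generic wrappers around a two-class SVM. A minimal sketch (assuming scikit-learn; the iris data is just a convenient 3-class example):

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                          # 3 classes

ovr = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)  # one SVM per class vs. the others
ovo = OneVsOneClassifier(SVC(kernel="linear")).fit(X, y)   # one SVM per pair of classes
print(ovr.predict(X[:5]), ovo.predict(X[:5]))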

Page 16:

SVMs: Pros and cons
• Pros
  • Many publicly available SVM packages: http://www.kernel-machines.org/software
  • The kernel-based framework is very powerful and flexible
  • SVMs work very well in practice, even with very small training sample sizes
• Cons
  • No “direct” multi-class SVM; two-class SVMs must be combined
  • Computation and memory (especially for nonlinear SVMs)
    – During training, the matrix of kernel values for every pair of examples must be computed
    – Learning can take a very long time for large-scale problems

Page 17:

Beyond simple classification: Structured prediction

Image → Word

Source: B. Taskar

Page 18:

Structured Prediction

Sentence → Parse tree

Source: B. Taskar

Page 19:

Structured Prediction

Sentence in two languages → Word alignment

Source: B. Taskar

Page 20:

Structured Prediction

Amino-acid sequence → Bond structure

Source: B. Taskar

Page 21:

Structured Prediction
• Many image-based inference tasks can loosely be thought of as “structured prediction”

Source: D. Ramanan


Page 22:

Unsupervised Learning

• Idea: Given only unlabeled data as input, learn some sort of structure

• The objective is often more vague or subjective than in supervised learning

• This is more of an exploratory/descriptive data analysis task

Page 23:

Unsupervised Learning

• Clustering
  – Discover groups of “similar” data points
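
For example, k-means is the standard way of discovering such groups. A minimal sketch (assuming scikit-learn and NumPy, on synthetic data):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three blobs of "similar" points
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 3])])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])          # cluster index assigned to each point
print(km.cluster_centers_)      # the discovered group centers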

Page 24:

Unsupervised Learning

• Quantization
  – Map a continuous input to a discrete (more compact) output

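A quantizer of this kind is just a lookup against a codebook (e.g., the k-means centers above). An illustrative sketch (NumPy assumed; the codebook values are made up):

import numpy as np

codebook = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 3.0]])    # e.g. learned cluster centers

def quantize(x, codebook):
    # the discrete output is the index of the nearest codebook entry
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

print(quantize(np.array([2.7, 3.2]), codebook))              # -> 1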

Page 25:

Unsupervised Learning
• Dimensionality reduction, manifold learning
  – Discover a lower-dimensional surface on which the data lives
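
PCA is the simplest (linear) instance of this idea. A minimal sketch (assuming scikit-learn and NumPy, on synthetic data that is essentially one-dimensional):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=200)
X = np.column_stack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(200, 3))   # 3-D points near a line

pca = PCA(n_components=1).fit(X)
Z = pca.transform(X)                      # coordinates on the discovered 1-D subspace
print(pca.explained_variance_ratio_)      # close to 1: a single direction explains the data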

Page 26:

Unsupervised Learning
• Density estimation
  – Find a function that approximates the probability density of the data (i.e., the value of the function is high for “typical” points and low for “atypical” points)
  – Can be used for anomaly detection
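
A minimal sketch of this use (assuming scikit-learn and NumPy): fit a kernel density estimate and flag low-density points as anomalies. The bandwidth and the 1% threshold are arbitrary choices for illustration.

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 2))                    # "typical" data

kde = KernelDensity(bandwidth=0.5).fit(X)
scores = kde.score_samples([[0.0, 0.0], [6.0, 6.0]])   # log-density of two query points
print(scores)                                          # the far-away point scores much lower
threshold = np.percentile(kde.score_samples(X), 1)     # flag the lowest-density 1% as anomalies
print(scores < threshold)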

Page 27:

Semi-supervised learning

• Lots of data is available, but only a small portion is labeled (e.g., because labeling is expensive)
  – Why is learning from labeled and unlabeled data better than learning from labeled data alone?
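
One simple way to exploit the unlabeled portion is self-training, sketched below (assuming scikit-learn and NumPy; the confidence threshold and number of rounds are arbitrary):

import numpy as np
from sklearn.svm import SVC

def self_train(X_lab, y_lab, X_unlab, rounds=5, conf=0.9):
    # Fit on the labeled data, then move confidently predicted unlabeled points
    # into the labeled set with their predicted ("pseudo") labels, and repeat.
    clf = SVC(kernel="linear", probability=True).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= conf
        if not confident.any():
            break
        pseudo = clf.classes_[proba.argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo[confident]])
        X_unlab = X_unlab[~confident]
        clf = SVC(kernel="linear", probability=True).fit(X_lab, y_lab)
    return clf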

Page 28:

Active learning
• The learning algorithm can choose its own training examples, or ask a “teacher” for an answer on selected inputs

S. Vijayanarasimhan and K. Grauman, “Cost-Sensitive Active Visual Category Learning,” 2009 
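
A bare-bones version of this idea is pool-based uncertainty sampling, sketched below (assuming scikit-learn and NumPy; this is only an illustration, not the cost-sensitive method of the cited paper). The oracle callback stands in for the "teacher".

import numpy as np
from sklearn.svm import SVC

def uncertainty_sampling(X_lab, y_lab, X_pool, oracle, n_queries=10):
    # At each step, query the label of the pool example the current SVM is
    # least certain about, i.e. the one closest to the decision boundary.
    clf = SVC(kernel="linear").fit(X_lab, y_lab)
    for _ in range(n_queries):
        if len(X_pool) == 0:
            break
        margins = np.abs(clf.decision_function(X_pool))   # small |w.x + b| = uncertain
        i = int(np.argmin(margins))
        X_lab = np.vstack([X_lab, X_pool[i:i + 1]])
        y_lab = np.append(y_lab, oracle(X_pool[i]))        # ask the "teacher" for the label
        X_pool = np.delete(X_pool, i, axis=0)
        clf = SVC(kernel="linear").fit(X_lab, y_lab)
    return clf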

Page 29:

Lifelong learning

http://rtw.ml.cmu.edu/rtw/

Page 30:

Lifelong learning

http://rtw.ml.cmu.edu/rtw/

Page 31:

Xinlei Chen, Abhinav Shrivastava and Abhinav Gupta. NEIL: Extracting Visual Knowledge from Web Data. In ICCV 2013