Page 1: Support Vector Machines

Support Vector Machines

Instructor Max Welling

ICS273A

UC Irvine

Page 2: Support Vector Machines

Philosophy

• First formulate a classification problem as finding a separating hyper-plane that maximizes “the margin”.

• Allow for errors in classification using “slack-variables”.

• Convert problem to the “dual problem”.

• The dual problem depends on the feature vectors only through their inner products, which can be replaced with kernels.

• A kernel is like using an infinite number of features.

Page 3: Support Vector Machines

The Margin

• Large margins are good for generalization performance (on future data).

• Note: this is very similar to logistic regression (but not identical).

Page 4: Support Vector Machines

Primal Problem

• We would like to find an expression for the margin (a distance).

• Points on the decision line satisfy:  w^T x + b = 0

• First imagine the line goes through the origin:  w^T x = 0.  Then shift the origin:  w^T (x - a) = 0.  Choose  a = -(b/||w||²) w,  so that  w^T a = -b  and the distance from the origin to the decision line is  ||a|| = |b|/||w||.

[Figure: the decision line with its normal vector w; its distance from the origin is b/||w||.]

Page 5: Support Vector Machines

Primal Problem

[Figure: the decision line and the two dashed support-vector lines; the distance from the origin to the decision line is b/||w||.]

• Points on the support-vector lines (dashed) are given by:

    w^T x + b = +1
    w^T x + b = -1

• If I rescale w → λw, b → λb, the decision line stays the same and only the constants on the right-hand sides rescale. Thus we can choose those constants to be ±1 without loss of generality.

Page 6: Support Vector Machines

Primal Problem

• We can express the margin (the distance between the two support-vector lines) as:

    (1 - b)/||w|| + (1 + b)/||w|| = 2/||w||

  The value 2/||w|| is always obtained; check this also for b in (-1, 0).

• Recall: we want to maximize the margin, such that all data-cases end up on the correct side of the support-vector lines:

    max_{w,b}  2/||w||    (equivalently:  min_{w,b}  ||w||²)

    subject to   w^T x_n + b ≥ +1   if y_n = +1
                 w^T x_n + b ≤ -1   if y_n = -1,   for all n

[Figure: the decision line and the two support-vector lines; the gap between the support-vector lines is the margin 2/||w||.]
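The identity above is easy to check numerically; the sketch below (plain NumPy with arbitrarily chosen w and b, not values from the slides) computes the two distances and compares their sum to 2/||w||.

```python
# Numerical check of the margin identity (w and b are arbitrary illustrative values).
import numpy as np

w = np.array([3.0, 4.0])     # normal vector of the decision line w^T x + b = 0
b = -0.5                     # offset chosen in (-1, 0), as the slide suggests checking

norm_w = np.linalg.norm(w)
d_plus  = (1 - b) / norm_w   # distance from the origin to the line w^T x + b = +1
d_minus = (1 + b) / norm_w   # distance from the origin to the line w^T x + b = -1

print(d_plus + d_minus, 2 / norm_w)   # both are 0.4: the margin is 2/||w||
```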

Page 7: Support Vector Machines

Primal problem (QP)

• An alternative derivation of the margin leads to the alternative primal problem formulation:

    min_{w,b}  ½ ||w||²
    s.t.  y_n (w^T x_n + b) - 1 ≥ 0,   for all n
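For concreteness, this QP can be handed to a general-purpose convex solver. The sketch below is a minimal illustration using the third-party cvxpy package on a tiny hand-made separable data set (both the package choice and the data are assumptions, not part of the slides).

```python
# Hard-margin primal SVM written directly as the QP above and solved with cvxpy.
import numpy as np
import cvxpy as cp

# Tiny hand-made, linearly separable data set (illustrative only).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(2)
b = cp.Variable()

objective   = cp.Minimize(0.5 * cp.sum_squares(w))    # (1/2)||w||^2
constraints = [cp.multiply(y, X @ w + b) >= 1]        # y_n (w^T x_n + b) - 1 >= 0 for all n

cp.Problem(objective, constraints).solve()
print(w.value, b.value)
```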

Page 8: Support Vector Machines

Slack Variables

• It is not very realistic to assume that the data are perfectly separable.

• Solution: add slack variables to allow violations of constraints:

    w^T x_n + b ≥ +1 - ξ_n   if y_n = +1
    w^T x_n + b ≤ -1 + ξ_n   if y_n = -1

• However, we should try to minimize the number of violations. We do this by adding a term to the objective:

    min_{w,b,ξ}  ½ ||w||² + C Σ_{n=1}^{N} ξ_n
    s.t.  y_n (w^T x_n + b) - 1 + ξ_n ≥ 0,   for all n
    s.t.  ξ_n ≥ 0,   for all n
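Continuing the same kind of sketch, the soft-margin version adds the slack vector ξ and the penalty C·Σ ξ_n; again cvxpy and the toy data (with one deliberately mislabelled point) are illustrative assumptions, not part of the slides.

```python
# Soft-margin primal SVM with slack variables, solved with cvxpy.
import numpy as np
import cvxpy as cp

# The third point is labelled +1 but sits inside the -1 cluster, so it needs slack.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.5, -2.5], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0])
C = 1.0                        # penalty on the total slack
N = len(y)

w  = cp.Variable(2)
b  = cp.Variable()
xi = cp.Variable(N)            # one slack variable per data-case

objective   = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi,   # y_n (w^T x_n + b) - 1 + xi_n >= 0
               xi >= 0]

cp.Problem(objective, constraints).solve()
print(np.round(xi.value, 3))   # nonzero slack marks the constraint violations
```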

Page 9: Support Vector Machines

Features

• Let's say we wanted to define new features, e.g.  φ(x) = [x, y, x², y², xy, ....]  for an input (x, y). The problem would then transform to:

    min_{w,b,ξ}  ½ ||w||² + C Σ_{n=1}^{N} ξ_n
    s.t.  y_n (w^T φ(x_n) + b) - 1 + ξ_n ≥ 0,   for all n
    s.t.  ξ_n ≥ 0,   for all n

• Rationale: data that is linearly non-separable in low dimensions may become linearly separable in high dimensions (provided sensible features are chosen).
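A minimal illustration of this rationale, with made-up 1-D data: the inner points and outer points cannot be split by any threshold on x, but adding the feature x² makes them linearly separable.

```python
# 1-D data that no threshold on x can separate, lifted with the extra feature x^2.
import numpy as np

x = np.array([-3.0, -2.5, -0.5, 0.0, 0.5, 2.5, 3.0])
y = np.array([-1, -1, 1, 1, 1, -1, -1])        # inner points vs. outer points

phi = np.stack([x, x**2], axis=1)              # phi(x) = [x, x^2]

# In the lifted space the classes are split by the line x^2 = 2, i.e. w = [0, -1], b = 2.
w, b = np.array([0.0, -1.0]), 2.0
print(np.sign(phi @ w + b) == y)               # all True: linearly separable after the lift
```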

Page 10: Support Vector Machines

Dual Problem

• Let’s say we wanted very many features (F>>N), or perhaps infinitely many features.

• In this case we would have very many parameters in w to fit.

• By converting to the dual problem, we only have to deal with exactly N parameters (one per data-case).

• This is a change of basis, where we recognize that we only need dimensions inside the space spanned by the data-cases.

• The transformation to the dual is rooted in the theory of constrained convex optimization. For a convex problem (no local minima) the dual problem is equivalent to the primal problem (i.e. we can switch between them).

Page 11: Support Vector Machines

Dual Problem (QP)

    max_α  Σ_n α_n - ½ Σ_{n,m} α_n α_m y_n y_m x_n^T x_m
    s.t.  Σ_n α_n y_n = 0,   α_n ∈ [0, C]   for all n

• The α_n should be interpreted as forces acting on the data-items. Think of a ball running down a hill (minimizing ||w||² over w). When it hits a wall, the wall starts pushing back, i.e. the force becomes active.

If a data-item is on the correct side of the margin, no force is active:  α_n = 0.

If a data-item is on the support-vector line (i.e. it is a support vector!), the force becomes active:  α_n ∈ [0, C].

If a data-item is on the wrong side of the support-vector line, the force is fully engaged:  α_n = C. (Without slack variables there is no upper bound C on the force.)
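To make the dual concrete, the sketch below solves it directly in the α's with cvxpy on the same toy data as the earlier primal sketch (again an illustrative assumption, not part of the slides); the quadratic term is written as ||Σ_n α_n y_n x_n||², which equals the double sum above.

```python
# Dual SVM solved directly in the alphas with cvxpy, on the same toy data as before.
import numpy as np
import cvxpy as cp

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 1.0
N = len(y)

alpha = cp.Variable(N)

# ||sum_n alpha_n y_n x_n||^2 equals the double sum  sum_{n,m} alpha_n alpha_m y_n y_m x_n^T x_m.
objective   = cp.Maximize(cp.sum(alpha) - 0.5 * cp.sum_squares(X.T @ cp.multiply(alpha, y)))
constraints = [cp.sum(cp.multiply(alpha, y)) == 0,   # sum_n alpha_n y_n = 0
               alpha >= 0, alpha <= C]               # alpha_n in [0, C]

cp.Problem(objective, constraints).solve()
print(np.round(alpha.value, 3))   # alpha_n = 0: no force; 0 < alpha_n < C: support vector on the margin;
                                  # alpha_n = C: force fully engaged
```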

Page 12: Support Vector Machines

Complementary Slackness

• The complementary slackness conditions come from the KKT conditions in convex optimization theory:

    α_n ( y_n (w^T x_n + b) - 1 + ξ_n ) = 0,   for all n

• From these conditions you can derive the conditions on α (previous slide).

• The fact that many α's are 0 is important for reasons of efficiency.

Page 13: Support Vector Machines

Kernel Trick

• Note that the dual problem only depends on the inner products  x_n^T x_m.

• We can now move to an infinite number of features by replacing:  φ(x_n)^T φ(x_m) → K(x_n, x_m).

• As long as the kernel satisfies 2 important conditions, you can forget about the features:

    K^T = K                      (symmetric)
    v^T K v ≥ 0  for all v       (positive semi-definite, i.e. non-negative eigenvalues)

• Examples:

    K_pol(x, y) = (r + x^T y)^d
    K_rbf(x, y) = exp(-c ||x - y||²)
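As a small sanity check (not from the slides), the degree-2 polynomial kernel with r = 1 on 2-D inputs is exactly the inner product of an explicit 6-dimensional feature map:

```python
# Check: the degree-2 polynomial kernel (x^T z + 1)^2 is an ordinary inner product of explicit features.
import numpy as np

def phi(x):
    """Explicit feature map whose inner product reproduces (x^T z + 1)^2 for 2-D inputs."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

lhs = phi(x) @ phi(z)          # inner product in the lifted 6-dimensional feature space
rhs = (x @ z + 1.0) ** 2       # kernel evaluated directly in the original 2-D input space
print(np.isclose(lhs, rhs))    # True: same number, without ever forming the features for the kernel
```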

Page 14: Support Vector Machines

Prediction

• If we work in high-dimensional feature spaces or with kernels, b has almost no impact on the final solution. In the following we set b = 0 for convenience.

• One can derive a relation between the primal and dual variables, w = Σ_n α_n y_n φ(x_n) (like the primal-dual transformation, it requires Lagrange multipliers, which we will avoid here; see the notes for background reading).

• Using this we can derive the prediction equation:

    y_test = sign( w^T x_test ) = sign( Σ_{n∈SV} α_n y_n K(x_n, x_test) )

• Note that it depends on the features only through their inner product (or kernel).

• Note: prediction only involves the support vectors (i.e. those vectors close to, or on the wrong side of, the boundary). This is also efficient.
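A sketch of this prediction rule in plain NumPy, assuming the support vectors and their α's have already been obtained (the values below are made up for illustration) and b = 0 as above:

```python
# Kernel SVM prediction from the dual variables (b = 0 as above); the support set below is made up.
import numpy as np

def rbf_kernel(x, z, c=0.5):
    """K(x, z) = exp(-c ||x - z||^2)."""
    return np.exp(-c * np.sum((x - z) ** 2))

def predict(x_test, X_sv, y_sv, alpha_sv, kernel=rbf_kernel):
    """sign( sum over support vectors of alpha_n y_n K(x_test, x_n) )."""
    s = sum(a * y * kernel(x_test, x) for a, y, x in zip(alpha_sv, y_sv, X_sv))
    return np.sign(s)

# Hypothetical support vectors and multipliers; in practice they come from solving the dual QP.
X_sv     = np.array([[2.0, 2.0], [-2.0, -2.0]])
y_sv     = np.array([1.0, -1.0])
alpha_sv = np.array([0.3, 0.3])

print(predict(np.array([1.5, 2.5]), X_sv, y_sv, alpha_sv))   # +1: closest to the positive support vector
```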

Page 15: Support Vector Machines

Conclusions

• Kernel-SVMs are non-parametric classifiers: they keep all the data around in the kernel matrix.

• Still, we often have parameters to tune (C, kernel parameters). This is done using cross-validation or by minimizing a bound on the generalization error.

• SVMs are state-of-the-art (given a good kernel).

• SVMs are also slow (at least O(N²)). However, approximations are available to alleviate that problem (e.g. O(N)).
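As a usage-level illustration of the tuning point above, a minimal scikit-learn sketch (assuming scikit-learn is available; the data set and grid values are arbitrary) that selects C and the RBF kernel width by cross-validation:

```python
# Choosing C and the RBF kernel width by cross-validation with scikit-learn (toy data, arbitrary grid).
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```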