Support Vector Machines


Page 1

Support Vector Machines

Page 2

Notation

• Assume a binary classification problem.

– Instances are represented by a vector x ∈ ℝⁿ.

– Training examples: x = (x1, x2, …, xn)

S = {(x1, y1), (x2, y2), ..., (xm, ym) | (xi, yi) ∈ ℝⁿ × {+1, -1}}

– Hypothesis: A function h: ℝⁿ → {+1, -1}.

h(x) = h(x1, x2, …, xn) ∈ {+1, -1}

Page 3

• Here, assume positive and negative instances are to be separated by the hyperplane

Equation of line:  w1x1 + w2x2 + b = 0

(Figure: positive and negative training points in the (x1, x2) plane, separated by this line.)

Page 4

• Intuition: the best hyperplane (for future generalization) will “maximally” separate the examples.

Page 5

Definition of Margin

Page 6

Definition of Margin

Vapnik (1979) showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal.

Page 7


This is an optimization problem!

Page 8

• Note that the hyperplane is defined as

w · x + b = 0

• To make the optimization easier, we will impose the following constraints:

w · xi + b ≥ +1 for positive examples (yi = +1)
w · xi + b ≤ -1 for negative examples (yi = -1)

Equivalently: yi(w · xi + b) ≥ 1 for all i.

Page 9

Geometry of the Margin (www.svms.org/tutorials/Hearst-etal1998.pdf)


Page 10

• In this case, the margin (the distance from the separating hyperplane to the closest examples, i.e., to the hyperplanes w · x + b = ±1) is 1/||w||.

• So to maximize the margin, we need to minimize ||w||.

Page 11

Minimizing ||w||

Find w and b by doing the following minimization:

minimize (1/2)||w||²  subject to  yi(w · xi + b) ≥ 1 for all i

This is a quadratic optimization problem. Use "standard optimization tools" to solve it.
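For concreteness, here is a minimal sketch of this minimization on a made-up, linearly separable toy dataset, using scipy.optimize.minimize as the "standard optimization tool"; the data and variable names are illustrative assumptions, not from the slides.

# Sketch: hard-margin SVM primal
#   minimize (1/2)||w||^2   subject to   y_i (w . x_i + b) >= 1
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable toy dataset (illustrative only).
X = np.array([[2.0, 2.0], [2.0, 3.0], [0.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(params):
    w = params[:2]
    return 0.5 * np.dot(w, w)            # (1/2)||w||^2; the bias b is not penalized

def margin_constraints(params):
    w, b = params[:2], params[2]
    return y * (X @ w + b) - 1.0         # every entry must be >= 0

result = minimize(objective, x0=np.zeros(3), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": margin_constraints}])
w, b = result.x[:2], result.x[2]
print("w =", w, "b =", b, "margin = 1/||w|| =", 1.0 / np.linalg.norm(w))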

Page 12

• Dual formulation: It turns out that w can be expressed as a linear combination of a small subset of the training examples xi, namely those that lie exactly on the margin (at minimum distance to the hyperplane):

w = Σi αi yi xi

where αi ≥ 0, and the examples xi for which αi > 0 are exactly those that lie on the margin.

• These training examples are called "support vectors". They carry all the relevant information about the classification problem.

Page 13

• The results of the SVM training algorithm (which involves solving a quadratic programming problem) are the αi and the bias b.

• The support vectors are all xi such that αi > 0.

Page 14

• For a new example x, we can now classify x using the support vectors:

h(x) = sgn( Σi αi yi (xi · x) + b )

• This is the resulting SVM classifier.
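A minimal sketch of this classifier in code, assuming the optimizer has already produced the support vectors, their labels yi, the coefficients αi, and the bias b; the numbers below are placeholders, not real training output.

import numpy as np

# Placeholder training output (illustrative values only).
support_vectors = np.array([[1.0, 1.0], [-1.0, 0.0]])
sv_labels = np.array([1.0, -1.0])
alphas = np.array([0.5, 0.5])
b = -0.2

def classify(x):
    # h(x) = sgn( sum_i alpha_i * y_i * (x_i . x) + b ), summed over the support vectors
    score = np.sum(alphas * sv_labels * (support_vectors @ x)) + b
    return 1 if score >= 0 else -1

print(classify(np.array([2.0, 2.0])))   # +1 for this placeholder model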

Page 15

In-class exercise

Page 16

SVM review

• Equation of line: w1x1 + w2x2 + b = 0

• Define the margin using the hyperplanes: w · x + b = +1 and w · x + b = -1

• Margin distance: 1/||w||

• To maximize the margin, we minimize ||w|| subject to the constraint that positive examples fall on one side of the margin, and negative examples on the other side:

yi(w · xi + b) ≥ 1 for all i

• We can relax this constraint using "slack variables"
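As a sketch of what the relaxed (soft-margin) problem looks like in practice, the example below uses scikit-learn's SVC with a linear kernel, whose parameter C controls how heavily slack (margin violations) is penalized; the dataset and settings are illustrative assumptions, not from the slides.

import numpy as np
from sklearn.svm import SVC

# A small toy dataset with one point on the "wrong" side of the boundary,
# so a separating hyperplane needs slack.
X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1], [0.2, 0.2]])
y = np.array([1, 1, 1, -1, -1, -1, -1])

for C in (0.1, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print("C =", C, " w =", clf.coef_[0], " b =", clf.intercept_[0],
          " #support vectors =", len(clf.support_vectors_))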

Page 17

SVM review

• To do the optimization, we use the dual formulation:

maximize Σi αi - (1/2) Σi Σj αi αj yi yj (xi · xj)
subject to αi ≥ 0 and Σi αi yi = 0

• The results of the optimization "black box" are the αi and b, where each αi ≥ 0.

• The support vectors are all xi such that αi > 0.

Page 18

SVM review

• Once the optimization is done, we can classify a new example x as follows:

h(x) = sgn( Σi αi yi (xi · x) + b )

• That is, classification is done entirely through a linear combination of dot products with training examples. This is a "kernel" method.

Page 19

Example

(Figure: the six training points plotted in the (x1, x2) plane, with both axes running from -2 to 2.)

Page 20

Example


Input to SVM optimizer:

x1   x2   class
 1    1     1
 1    2     1
 2    1     1
-1    0    -1
 0   -1    -1
-1   -1    -1

Page 21

Example


Output from SVM optimizer:

Support vector    yα
(-1,  0)        -0.208
( 1,  1)         0.416
( 0, -1)        -0.208

b = -0.376

Page 22

Example


Weight vector:

w = Σi yi αi xi = 0.416 (1, 1) - 0.208 (-1, 0) - 0.208 (0, -1) = (0.624, 0.624)

Page 23

Example


Separation line:

0.624 x1 + 0.624 x2 - 0.376 = 0, i.e., x2 = -x1 + 0.603

Page 24

Example


Classifying a new point x:

h(x) = sgn(w · x + b) = sgn( Σi yi αi (xi · x) + b )
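As a cross-check of this worked example (not part of the original slides), the sketch below feeds the same six training points to scikit-learn's SVC with a linear kernel and a large C, which approximates the hard-margin problem. Its dual_coef_ (the yiαi values), support_vectors_, intercept_ (b), and coef_ (w) should come out in the same ballpark as the numbers above; the exact values depend on the solver and its tolerances, so small differences from the slide's optimizer output are expected.

import numpy as np
from sklearn.svm import SVC

# The six training points from the example above.
X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1]])
y = np.array([1, 1, 1, -1, -1, -1])

# A large C approximates the hard-margin SVM used in the slides.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print("support vectors:\n", clf.support_vectors_)
print("y_i * alpha_i:", clf.dual_coef_[0])    # compare with the slide's yα column
print("b:", clf.intercept_[0])                # compare with the slide's b
print("w:", clf.coef_[0])                     # compare with the slide's weight vector

# Classify a new point x via sign(w . x + b).
x_new = np.array([1.0, 0.0])
print("class of", x_new, ":", clf.predict(x_new.reshape(1, -1))[0])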

Page 25

Non-linearly separable training examples

• What if the training examples are not linearly separable?

Page 26

Non-linearly separable training examples

• Use an old trick: find a function that maps points to a higher-dimensional space ("feature space") in which they are linearly separable, and do the classification in that higher-dimensional space.

Page 27

Need to find a function that will perform such a mapping:

Φ : ℝⁿ → F

Then we can find a separating hyperplane in the higher-dimensional feature space F, and do the classification using that hyperplane.

(Figure: points that are not linearly separable in the (x1, x2) plane become linearly separable after being mapped into a 3-dimensional feature space with coordinates (x1, x2, x3).)

Page 28

Example

Find a 3-dimensional feature space in which XOR is linearly separable.

(Figure: the four XOR points (0, 0), (0, 1), (1, 0), (1, 1) in the (x1, x2) plane, to be mapped into a 3-dimensional space with coordinates (x1, x2, x3).)

Page 29

• Problem:

– Recall that classification of instance x is expressed in terms of dot products of x and support vectors.

– The quadratic programming problem of finding the support vectors and coefficients likewise depends on the training examples only through dot products between them.

Page 30

– So if each xi is replaced by Φ(xi) in these procedures, we will have to calculate Φ(xi) for each i, as well as calculate a lot of dot products

Φ(xi) · Φ(xj)

– But in general, if the feature space F is high-dimensional, Φ(xi) · Φ(xj) will be expensive to compute.

Page 31

• Second trick:

– Suppose that there were some magic function

k(xi, xj) = Φ(xi) · Φ(xj)

such that k is cheap to compute even though Φ(xi) · Φ(xj) is expensive to compute.

– Then we wouldn’t need to compute the dot product directly; we’d just need to compute k during both the training and testing phases.

– The good news is: such k functions exist! They are called “kernel functions”, and come from the theory of integral operators.

Page 32

Example: Polynomial kernel k(x, z) = (x · z)²

Suppose x = (x1, x2) and z = (z1, z2). Then

k(x, z) = (x1z1 + x2z2)² = x1²z1² + 2 x1x2 z1z2 + x2²z2² = (x1², √2 x1x2, x2²) · (z1², √2 z1z2, z2²) = Φ(x) · Φ(z)

where Φ(x) = (x1², √2 x1x2, x2²). So this kernel computes a dot product in a 3-dimensional feature space without ever forming Φ(x) explicitly.
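A quick numerical sanity check of this identity (a sketch using the homogeneous degree-2 kernel and feature map written above):

import numpy as np

def k(x, z):
    # Degree-2 polynomial kernel: k(x, z) = (x . z)^2
    return np.dot(x, z) ** 2

def phi(x):
    # Explicit feature map for the same kernel: (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return np.array([x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(k(x, z), np.dot(phi(x), phi(z)))   # both are 1.0: the kernel equals the feature-space dot product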

Page 33

Example

Does the degree-2 polynomial kernel make XOR linearly separable?

(Figure: the four XOR points (0, 0), (0, 1), (1, 0), (1, 1) in the (x1, x2) plane.)

Page 34

Recap of SVM algorithm

Given training set

S = {(x1, y1), (x2, y2), ..., (xm, ym) | (xi, yi) ∈ ℝⁿ × {+1, -1}}

1. Choose a map Φ : ℝⁿ → F, which maps each xi to a higher-dimensional feature space. (This solves the problem that the training examples might not be linearly separable in the original space.)

2. Choose a cheap-to-compute kernel function

k(x, z) = Φ(x) · Φ(z)

(This solves the problem that in high-dimensional spaces, dot products are very expensive to compute.)

Page 35

4. Apply the quadratic programming procedure (using the kernel function k) to find a hyperplane (w, b) with

w = Σi αi yi Φ(xi)

where the Φ(xi) are the support vectors, the αi are coefficients, and b is a bias, such that (w, b) is the hyperplane maximizing the margin of the training set in feature space F.

Page 36

Wednesday, April 24

• Discussion of Decision Tree Homework

• Finish SVM slides

• SVM exercises

• Jens Burger, Phoneme Recognition with Large Hierarchical Reservoirs

• Dian Chase, Support vector machines combined with feature selection for breast cancer diagnosis

• Midterm review

Page 37

5. Now, given a new instance x, find the classification of x by computing

h(x) = sgn( Σi αi yi k(xi, x) + b )
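Putting the recap together, here is a sketch of the full pipeline using scikit-learn (an illustration under assumed toy data and settings, not the course's own code): steps 1-2 are folded into the choice of kernel, step 4 is the quadratic-programming fit, and step 5 is prediction.

import numpy as np
from sklearn.svm import SVC

# Toy data: points inside the unit circle are +1, points outside are -1.
# Not linearly separable in R^2, but separable under a quadratic feature map.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.where(np.sum(X ** 2, axis=1) < 1.0, 1, -1)

# Steps 1-2: choose the feature map implicitly through a kernel (degree-2 polynomial).
# Step 4: the quadratic programming happens inside fit().
clf = SVC(kernel="poly", degree=2, coef0=1, C=10.0).fit(X, y)

# Step 5: classify new instances x via sign( sum_i alpha_i y_i k(x_i, x) + b ).
print(clf.predict(np.array([[0.1, 0.2], [1.8, -1.5]])))   # expected: [ 1 -1]
print("training accuracy:", clf.score(X, y))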

Page 41

More on Kernels

• So far we've seen kernels that map instances in ℝⁿ to instances in ℝᶻ, where z > n.

• One way to create a kernel: figure out an appropriate feature space Φ(x), and find a kernel function k which defines an inner product on that space.

• More practically, we usually don't know the appropriate feature space Φ(x).

• What people do in practice is either:

1. Use one of the “classic” kernels (e.g., polynomial),

or

2. Define their own function that is appropriate for their task, and show that it qualifies as a kernel.

Page 42

How to define your own kernel

• Given training data (x1, x2, ..., xn)

• The algorithm for SVM learning uses the kernel matrix (also called the Gram matrix), whose entries are Kij = k(xi, xj).

• We can choose some function k, and compute the kernel matrix K using the training data.

• We just have to guarantee that our kernel defines an inner product on some feature space.

• Not as hard as it sounds.

Page 43

What counts as a kernel?

• Mercer’s Theorem: If the kernel matrix K is symmetric and positive semidefinite, it defines a valid kernel; that is, it defines an inner product in some feature space.

• We don’t even have to know what that feature space is!
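A small sketch of this check (the kernel choice and data are illustrative assumptions): build the Gram matrix K for a candidate kernel on the training data, then verify that K is symmetric and has no negative eigenvalues.

import numpy as np

def my_kernel(x, z):
    # Candidate kernel; the RBF kernel is a classic example that satisfies Mercer's condition.
    return np.exp(-np.sum((x - z) ** 2))

# Toy training data.
X = np.array([[0.0, 0.0], [1.0, 0.5], [-1.0, 2.0], [0.3, -0.7]])
n = len(X)

# Gram (kernel) matrix: K[i, j] = k(x_i, x_j).
K = np.array([[my_kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

eigenvalues = np.linalg.eigvalsh(K)     # real eigenvalues of the symmetric matrix K
print("symmetric:", np.allclose(K, K.T))
print("eigenvalues:", eigenvalues)
print("positive semidefinite:", np.all(eigenvalues >= -1e-10))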

Page 44

Kernel Functions as Similarity Measures

• Note that the dot product gives a measure of similarity between two vectors:

xi · xj = ||xi|| ||xj|| cos θ, where θ is the angle between xi and xj

(Figure: two vectors xi and xj with angle θ between them.)

So, often we can interpret a kernel function as measuring similarity between a test vector x and a training vector xi in feature space.

In this view, SVM classification of x amounts to a weighted sum of similarities between x and support vectors in feature space.

Page 45

Structural Kernels

• In domains such as natural language processing and bioinformatics, the similarities we want to capture are often structural (e.g., parse trees, formal grammars).

• An important area of kernel methods is defining structural kernels that capture this similarity (e.g., sequence alignment kernels, tree kernels, graph kernels, etc.)

Page 46

From www.cs.pitt.edu/~tomas/cs3750/kernels.ppt:

• Design criteria: we want kernels to be

– valid: satisfy the Mercer condition of positive semidefiniteness
– good: embody the "true similarity" between objects
– appropriate: generalize well
– efficient: the computation of k(x, x') is feasible

Page 47

Example: Watson used tree kernels and SVMs to classify question types for Jeopardy! questions.

From Moschitti et al., 2011