Support Vector Machines (Vapnik, 1979)
Source: web.cecs.pdx.edu/~mm/AIFall2011/SVMs.pdf

Page 1: Support Vector Machines (Vapnik, 1979)

Support Vector Machines (Vapnik, 1979)

•  Assume a binary classification problem.

–  Instances are represented by vector x ∈ ℜn.

–  Training examples: x = (f1, f2, …, fn)

S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where (xi, yi) ∈ ℜn × {+1, -1}

–  Hypothesis: A function h: ℜn→{+1, -1}.

h(x) = h(f1, f2, …, fn) ∈{+1, -1}

Page 2: Support Vector Machines (Vapnik, 1979)

•  Here, assume positive and negative instances are to be separated by the hyperplane

w⋅ x + b = 0

In two dimensions this is the equation of a line:

w⋅ x + b = w1 f1 + w2 f2 + b = 0

[Figure: positive (+) and negative (-) training examples plotted in the (f1, f2) plane, with a line separating them.]
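A minimal sketch in Python/NumPy of this setup, with made-up data points and a made-up line w⋅x + b = 0: each instance is classified by which side of the line it falls on.

```python
import numpy as np

# Hypothetical 2-D training set: rows of X are instances x = (f1, f2),
# y holds the labels in {+1, -1}.
X = np.array([[2.0, 2.5],
              [3.0, 1.5],
              [-1.0, -2.0],
              [-2.5, -0.5]])
y = np.array([+1, +1, -1, -1])

# A hypothetical separating line w.x + b = w1*f1 + w2*f2 + b = 0.
w = np.array([1.0, 1.0])
b = -0.5

# Classify each instance by which side of the line it falls on.
predictions = np.sign(X @ w + b)
print(predictions)               # [ 1.  1. -1. -1.]
print(np.all(predictions == y))  # True: this w, b separates the examples
```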

Page 3: Support Vector Machines (Vapnik, 1979)

•  Intuition: the best hyperplane (for future generalization) will “maximally” separate the examples

w⋅ x + b = 0

[Figure: the same examples in the (f1, f2) plane, with the hyperplane w⋅x + b = 0 drawn so that it maximally separates the two classes.]

Page 4: Support Vector Machines (Vapnik, 1979)

•  The margin of the positive examples, d+ with respect to that hyperplane, is the shortest distance from a positive example to the hyperplane:

[Figure: the distance d+ from the closest positive example to the hyperplane, in the (f1, f2) plane.]

Definition of Margin

Page 5: Support Vector Machines (Vapnik, 1979)

•  The margin of the negative examples, d- with respect to that hyperplane, is the shortest distance from a negative example to the hyperplane:

[Figure: the distances d+ and d- from the closest positive and negative examples to the hyperplane, in the (f1, f2) plane.]

Page 6: Support Vector Machines (Vapnik, 1979)

•  The margin of the training set S with respect to the hyperplane is d+ + d- .

[Figure: as on the previous page, showing d+ and d- in the (f1, f2) plane.]
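A minimal sketch, with the same made-up data and hyperplane as above, of computing d+, d-, and the margin via the standard point-to-hyperplane distance |w⋅x + b| / ||w||:

```python
import numpy as np

X = np.array([[2.0, 2.5], [3.0, 1.5], [-1.0, -2.0], [-2.5, -0.5]])
y = np.array([+1, +1, -1, -1])
w, b = np.array([1.0, 1.0]), -0.5

# Distance of each point to the hyperplane w.x + b = 0.
dist = np.abs(X @ w + b) / np.linalg.norm(w)

d_plus = dist[y == +1].min()   # shortest distance of a positive example
d_minus = dist[y == -1].min()  # shortest distance of a negative example
margin = d_plus + d_minus      # margin of S w.r.t. this hyperplane
print(d_plus, d_minus, margin)
```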

Page 7: Support Vector Machines (Vapnik, 1979)

•  The margin of the training set S with respect to the hyperplane is d+ + d- .

[Figure: as on the previous page, showing d+ and d- in the (f1, f2) plane.]

Vapnik showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal.

Page 8: Support Vector Machines (Vapnik, 1979)

•  The margin of the training set S with respect to the hyperplane is d+ + d- .

[Figure: as on the previous page.]

Vapnik showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal.

This is an optimization problem!

Page 9: Support Vector Machines (Vapnik, 1979)

•  Note that the hyperplane is defined as

w⋅ x + b = 0

•  To make the math easier, we will rescale w and b such that the hyperplane is halfway in between the closest positive and negative examples, with the closest positive examples satisfying w⋅ x + b = +1 and the closest negative examples satisfying w⋅ x + b = -1 (so that yi(w⋅ xi + b) ≥ 1 for every training example).

Page 10: Support Vector Machines (Vapnik, 1979)

[Figure from the M. A. Hearst et al. paper (on the class web page).]

Page 11: Support Vector Machines (Vapnik, 1979)

•  In this case, the margin is 2 / ||w||.

•  So to maximize the margin, we need to minimize ||w||.

Page 12: Support Vector Machines (Vapnik, 1979)

Minimizing ||w||

Find w and b by doing the following minimization:

minimize (1/2) ||w||²  subject to  yi(w⋅ xi + b) ≥ 1  for i = 1, …, m

This is a quadratic optimization problem. Use “standard optimization tools” to solve it.
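A minimal sketch of one such tool: scikit-learn's SVC with a linear kernel and a very large C approximately solves the hard-margin problem above (the tiny data set is made up).

```python
import numpy as np
from sklearn.svm import SVC

# Made-up linearly separable data.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.0],
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, 0.0]])
y = np.array([+1, +1, +1, -1, -1, -1])

# Large C ~ hard margin: solve min (1/2)||w||^2 s.t. y_i (w.x_i + b) >= 1.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w = clf.coef_[0]
b = clf.intercept_[0]
print("w =", w, " b =", b)
print("margin = 2/||w|| =", 2.0 / np.linalg.norm(w))
```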

Page 13: Support Vector Machines (Vapnik, 1979)

•  Dual formulation: It turns out that w can be expressed as a linear combination of a small subset of the training examples: those that lie exactly on the margin (i.e., at minimum distance to the hyperplane):

w = Σi αi yi xi ,

where the sum is over the training examples xi that lie exactly on the margin.

•  These training examples are called “support vectors”. They carry all relevant information about the classification problem.

Page 14: Support Vector Machines (Vapnik, 1979)

•  The result of the SVM training algorithm (which involves solving a quadratic programming problem) is the set of coefficients αi and the corresponding support vectors xi.

Page 15: Support Vector Machines (Vapnik, 1979)

•  For a new example x, we can now classify x using the support vectors:

h(x) = sgn( Σi αi yi (xi ⋅ x) + b ) ,

where the sum is over the support vectors xi.

•  This is the resulting SVM classifier.
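A sketch showing that the classifier is determined entirely by the support vectors, assuming scikit-learn's linear SVC: its dual_coef_ holds the products αi yi, support_vectors_ holds the xi on the margin, and sgn(Σ αi yi (xi ⋅ x) + b) reproduces the model's own predictions (data values are made up).

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.0],
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, 0.0]])
y = np.array([+1, +1, +1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)

alpha_y = clf.dual_coef_[0]   # alpha_i * y_i for each support vector
sv = clf.support_vectors_     # the support vectors x_i
b = clf.intercept_[0]

x_new = np.array([1.0, 0.5])  # a new instance to classify
decision = np.sum(alpha_y * (sv @ x_new)) + b
print(np.sign(decision), clf.predict([x_new])[0])  # the two agree
```

Only the support vectors, their coefficients, and the threshold are needed at prediction time.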

Page 16: Support Vector Machines (Vapnik, 1979)

Non-linearly separable training examples

•  What if the training examples are not linearly separable?

•  Use an old trick: find a function that maps the points to a higher-dimensional space (“feature space”) in which they are linearly separable, and do the classification in that higher-dimensional space.

Page 17: Support Vector Machines (Vapnik, 1979)

We need to find a function Φ that will perform such a mapping: Φ: ℜn→ F.

Then we can find a hyperplane in the higher-dimensional feature space F, and do the classification using that hyperplane.
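A minimal sketch of this trick: two concentric rings of points are not linearly separable in ℜ2, but under the quadratic map Φ(x) = (x1², √2 x1x2, x2²) the class depends only on x1² + x2², so a linear classifier in the feature space separates them (scikit-learn is assumed for the data set and the linear classifier).

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC

# Two concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

def phi(X):
    """Quadratic feature map Phi: R^2 -> R^3 (an illustrative choice)."""
    f1, f2 = X[:, 0], X[:, 1]
    return np.column_stack([f1**2, np.sqrt(2) * f1 * f2, f2**2])

raw = LinearSVC(max_iter=10000).fit(X, y)
mapped = LinearSVC(max_iter=10000).fit(phi(X), y)

print("accuracy in original space:", raw.score(X, y))        # typically near chance
print("accuracy in feature space:", mapped.score(phi(X), y))  # close to 1.0
```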

Page 18: Support Vector Machines (Vapnik, 1979)

•  Problem:

–  Recall that classification of instance x is expressed in terms of dot products of x and support vectors.

–  The quadratic programming problem of finding the support vectors and coefficients also depends only on dot products between training examples; the training examples never appear outside of dot products.

Page 19: Support Vector Machines (Vapnik, 1979)

–  So if each xi is replaced by Φ(xi) in these procedures, we will have to calculate a lot of dot products,

Φ(xi)⋅ Φ(xj)

–  But in general, if the feature space F is high-dimensional, Φ(xi) ⋅ Φ(xj) will be expensive to compute.

–  Also, Φ(x) itself can be expensive to compute.

Page 20: Support Vector Machines (Vapnik, 1979)

•  Second trick:

–  Suppose that there were some magic function,

k(xi, xj) = Φ(xi) ⋅ Φ(xj)

such that k is cheap to compute even though Φ(xi) ⋅ Φ(xj) is expensive to compute.

–  Then we wouldn’t need to compute the dot product directly; we’d just need to compute k during both the training and testing phases.

–  The good news is: such k functions exist! They are called “kernel functions”, and come from the theory of integral operators.

Page 21: Support Vector Machines (Vapnik, 1979)

Example: Polynomial kernel

Suppose x = (x1, x2) and y = (y1, y2). Consider the kernel

k(x, y) = (x ⋅ y)² = (x1y1 + x2y2)² = x1²y1² + 2 x1y1 x2y2 + x2²y2² ,

which equals Φ(x) ⋅ Φ(y) for the mapping Φ(x) = (x1², √2 x1x2, x2²). So k(x, y) can be computed directly from x and y, without ever computing Φ.
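A quick numerical check of this identity, assuming the kernel k(x, y) = (x ⋅ y)² and the map Φ above (the test vectors are arbitrary):

```python
import numpy as np

def phi(v):
    """Explicit feature map Phi: R^2 -> R^3 for the quadratic kernel."""
    v1, v2 = v
    return np.array([v1**2, np.sqrt(2) * v1 * v2, v2**2])

def k(x, y):
    """Kernel k(x, y) = (x . y)^2, computed without ever forming Phi."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(k(x, y))                 # 1.0  -> (1*3 + 2*(-1))^2 = 1
print(np.dot(phi(x), phi(y)))  # 1.0  -> same value, via the feature map
print(np.isclose(k(x, y), np.dot(phi(x), phi(y))))  # True
```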

Page 22: Support Vector Machines (Vapnik, 1979)

Recap of SVM algorithm

Given training set S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where (xi, yi) ∈ ℜn × {+1, -1}:

1.  Choose a map Φ: ℜn→ F, which maps xi to a higher-dimensional feature space. (This solves the problem that the training examples might not be linearly separable in the original space.)

2.  Choose a cheap-to-compute kernel function k(x,z)=Φ(x) ⋅ Φ(z)

(Solves problem that in high dimensional spaces, dot products are very expensive to compute.)

3.  Map all the xi’s to feature space F by computing Φ(xi).

Page 23: Support Vector Machines (Vapnik, 1979)

4.  Apply a quadratic programming procedure (using the kernel function k) to find a hyperplane (w, w0) such that

w = Σi αi yi Φ(xi) ,

where the Φ(xi)’s are the support vectors, the αi’s are coefficients, and w0 is a threshold, and (w, w0) is the hyperplane maximizing the margin of S in F.

Page 24: Support Vector Machines (Vapnik, 1979)

•  Now, given a new instance x, find the classification of x by computing

h(x) = sgn( Σi αi yi k(xi, x) + w0 ) ,

where the sum is over the support vectors. (Since k(xi, x) = Φ(xi) ⋅ Φ(x), this is exactly sgn(w ⋅ Φ(x) + w0), but Φ is never computed explicitly.)
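A compact sketch of the whole recipe using scikit-learn on a made-up, non-linearly-separable data set: the kernel stands in for Φ (which is never computed explicitly), SVC.fit solves the quadratic program, and SVC.predict classifies new instances.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic, non-linearly-separable data standing in for the training set S.
X, y = make_circles(n_samples=300, factor=0.3, noise=0.1, random_state=0)
y = np.where(y == 1, 1, -1)   # labels in {+1, -1} as in the slides

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Choose a kernel (here a degree-2 polynomial) and solve the QP.
clf = SVC(kernel="poly", degree=2, coef0=0.0, C=1.0)
clf.fit(X_train, y_train)

# Classification of new instances x via sgn(sum_i alpha_i y_i k(x_i, x) + w0).
print("number of support vectors:", clf.n_support_.sum())
print("test accuracy:", clf.score(X_test, y_test))
```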

Page 25: Support Vector Machines (Vapnik, 1979)

Demo: Spam classification using SVMs

Page 26: Support Vector Machines (Vapnik, 1979)

Example: Applying SVMs to text classification

(Dumais et al., 1998)

•  Used Reuters collection of news stories.

•  Question: Is a particular news story in the category “grain” (i.e., about grain, grain prices, etc.)?

•  Training examples: Vectors of features such as appearance or frequency of key words. (Similar to our spam-classification task.)

•  Resulting SVM: weight vector defined in terms of support vectors and coefficients, plus threshold.

Page 27: Support Vector Machines (Vapnik, 1979)

Results

Page 28: Support Vector Machines (Vapnik, 1979)

Precision / Recall

•  Confusion matrix for a classifier:

                      Classified Positive       Classified Negative
Positive Examples     True positives (TP)       False negatives (FN)
Negative Examples     False positives (FP)      True negatives (TN)

Page 29: Support Vector Machines (Vapnik, 1979)

Some performance measures

•  Accuracy: proportion of classifications, over all N examples, that were correct: (TP + TN) / N.

•  Recall (or true positive rate, or “detection rate”): proportion of positive examples that were classified correctly: TP / (TP + FN).

•  Precision: proportion of correct positive classifications over all positive classifications: TP / (TP + FP).
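A minimal sketch of these three formulas in code, with made-up confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts.
TP, FN = 40, 10   # positive examples: correctly / incorrectly classified
FP, TN = 5, 45    # negative examples: incorrectly / correctly classified
N = TP + FN + FP + TN

accuracy = (TP + TN) / N    # proportion of all classifications that are correct
recall = TP / (TP + FN)     # proportion of positive examples that were found
precision = TP / (TP + FP)  # proportion of positive classifications that are correct

print(f"accuracy={accuracy:.2f} recall={recall:.2f} precision={precision:.2f}")
# accuracy=0.85 recall=0.80 precision=0.89
```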

Page 30: Support Vector Machines (Vapnik, 1979)

Example

Test data     Correct Classification     Model’s Classification
x1            T                          T
x2            T                          F
x3            F                          T
x4            F                          F
x5            F                          T
x6            F                          F
x7            F                          F
x8            F                          T

Here TP = 1, FN = 1, FP = 3, TN = 3, so:

Accuracy = (TP + TN) / N = (1 + 3) / 8 = 0.5
Recall = TP / (TP + FN) = 1 / 2 = 0.5
Precision = TP / (TP + FP) = 1 / 4 = 0.25
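A check of the worked example above, encoding T as 1 and F as 0 and assuming scikit-learn's metric functions:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# x1..x8 from the table: correct classification vs. model's classification.
y_true = [1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]

print("accuracy  =", accuracy_score(y_true, y_pred))   # 0.5
print("recall    =", recall_score(y_true, y_pred))     # 0.5
print("precision =", precision_score(y_true, y_pred))  # 0.25
```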

Page 31: Support Vector Machines (Vapnik, 1979)

Interpretation of precision and recall

•  Precision and recall are often plotted against one another, especially in “detection” applications (such as spam detection), when positive examples are sparse in the observed data.

•  Recall: How often did the system correctly identify positive examples when it encountered them?

•  Precision: How often did the system get positive classifications correct?

•  How do these two measures trade off against one another?

Page 32: Support Vector Machines (Vapnik, 1979)

Precision / Recall Curves
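A minimal sketch of how such a curve is produced: sweep a threshold over the SVM's real-valued outputs and record precision and recall at each setting, here via scikit-learn's precision_recall_curve on a made-up detection-style data set with sparse positives.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic detection-style problem: positives are rare.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)
scores = clf.decision_function(X_test)  # real-valued outputs, before thresholding

precision, recall, thresholds = precision_recall_curve(y_test, scores)
for p, r in list(zip(precision, recall))[::10]:
    print(f"precision={p:.2f} recall={r:.2f}")
```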

Page 33: Support Vector Machines (Vapnik, 1979)

SVMs also used in Watson for question classification

e.g., see Moschitti et al., “Using syntactic and semantic structural kernels for classifying definition questions in Jeopardy!”

Page 34: Support Vector Machines (Vapnik, 1979)

SVM Demo