
Page 1

Classification Applet

http://www.cs.technion.ac.il/~rani/LocBoost/

Multiclass classification

Page 2

Confusion matrix

Precision of classifier?

Recall of classifier?

Support Vector Machines

Textbook, Chapter 15

April 27, 2010

Page 3

Example

• UCI Spam data: http://archive.ics.uci.edu/ml/machine-learning-databases/spambase/
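As an illustration (not part of the original slides), a minimal sketch of loading this data set in Python; it assumes the standard layout of spambase.data (57 numeric feature columns followed by a 0/1 spam label) and a hypothetical local file path.

# Minimal sketch: load the UCI Spambase data set.
# Assumes spambase.data has been downloaded from the URL above and that the
# standard layout (57 features, last column = 1 for spam, 0 otherwise) applies.
import pandas as pd

df = pd.read_csv("spambase.data", header=None)   # hypothetical local path
X = df.iloc[:, :-1].values                       # 57 feature columns
y = df.iloc[:, -1].values                        # spam / non-spam label

print(X.shape, y.mean())   # number of examples/features, fraction labeled spam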


“A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist’s lazy brother, who declares that if it’s green, it’s a tree. Neither can generalize well.” (C. Burges, A tutorial on support vector machines for pattern recognition).

Page 4

Perceptrons

• Invented by Rosenblatt, 1950s

• Single-layer feedforward network:

y = Σ_{i=1}^{n} wi xi + b

o = +1 if y > 0
     0 if y = 0
    −1 otherwise

[Figure: single-layer feedforward network with inputs x1, ..., xn, weights w1, ..., wn, and a single output unit o]

This can be used for classification: +1 = positive, -1 = negative, 0 = “uncertain”.
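A minimal sketch (not from the original slides) of this decision rule in Python; the weight vector and bias below are illustrative, not trained.

# Sketch of the perceptron decision rule: y = sum_i w_i x_i + b, o = +1 / 0 / -1.
import numpy as np

def perceptron_output(x, w, b):
    y = np.dot(w, x) + b
    if y > 0:
        return +1          # positive
    elif y == 0:
        return 0           # "uncertain"
    else:
        return -1          # negative

w = np.array([1.0, 1.0])   # hypothetical weights
b = -1.5                   # hypothetical bias
print(perceptron_output(np.array([1.0, 1.0]), w, b))   # +1
print(perceptron_output(np.array([0.0, 0.0]), w, b))   # -1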

• Consider example with two inputs, x1, x2.

[Figure: scatter plot of labeled examples in the (x1, x2) plane]

Can view trained network as defining “separation line”. What is its equation?

Page 5

• Consider example with two inputs, x1, x2.

[Figure: scatter plot of labeled examples in the (x1, x2) plane]

We have:

w1 x1 + w2 x2 + b = 0, i.e., x2 = −(w1 / w2) x1 − b / w2


• This is a “linear discriminant function” or “linear decision

surface”.

• Weights determine slope and bias determines offset.

• Can generalize to n-dimensions:

– Decision surface is an n-1 dimensional “hyperplane”.

w1 x1 + w2 x2 + ... + wn xn + b = 0

Page 6

In-class Exercises:

(a) Find weights for a two-input perceptron that

computes f(x1, x2) = x1 ∧ x2. Sketch the separation

line defined by this perceptron.

(b) Do the same, but for x1 ∨ x2.
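One possible solution sketch (not from the slides): with inputs in {0, 1} and the output treated as 1 = true, 0 = false, the hypothetical weights below realize AND and OR; the snippet just checks them against the truth tables. Many other weight choices work.

# Sketch: candidate perceptron weights for AND and OR over inputs in {0, 1}.
# These particular weights/biases are one of many valid choices.
import numpy as np

def predict(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

and_w, and_b = np.array([1.0, 1.0]), -1.5   # AND: separation line x1 + x2 = 1.5
or_w,  or_b  = np.array([1.0, 1.0]), -0.5   # OR:  separation line x1 + x2 = 0.5

for x1 in (0, 1):
    for x2 in (0, 1):
        x = np.array([x1, x2])
        print(x1, x2, predict(x, and_w, and_b), predict(x, or_w, or_b))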

Support Vector Machines

• Assume a linearly separable binary classification problem.

– Instances are represented by vector x∈ℜn.

– Training examples:

S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where (xi, yi) ∈ ℜn × {+1, -1}

– Hypothesis: A function h: ℜn→{+1, -1}.

h(x) = sgn(w ⋅ x + b) = sgn( Σ_{i=1}^{n} wi xi + b )

Page 7

• Positive and negative instances are to be separated by the hyperplane

w ⋅ x + b = 0

• But: Which separating hyperplane is optimal?

Definition of Margin

• The margin of an instance with respect to a hyperplane is

the distance from that point to the hyperplane:

[Figure: positive (+) and negative (−) examples on either side of a separating hyperplane]

Page 8

• The margin of the positive examples, d+, with respect to that hyperplane is the shortest distance from a positive example to the hyperplane:

[Figure: separating hyperplane with d+, the distance to the nearest positive example, marked]

• The margin of the negative examples, d−, with respect to that hyperplane is the shortest distance from a negative example to the hyperplane:

[Figure: the same hyperplane with both d+ and d− marked]

Page 9

• The margin of the training set S with respect to the hyperplane is d+ + d−.

[Figure: separating hyperplane with both margins d+ and d− marked]

Vapnik showed that the hyperplane maximizing the margin of S will have minimal “VC dimension” (complexity) in the set of all consistent hyperplanes, and will thus be optimal.

This is an optimization problem!

Page 10

• Note that the hyperplane is defined as

w ⋅ x + b = 0

• To make the math easier, we will put a restriction on the set of such hyperplanes, by also requiring that:

w ⋅ xi + b ≥ +1 for positive instances (yi = +1)
w ⋅ xi + b ≤ −1 for negative instances (yi = −1)

• Consider the points xi for which these are equalities:

• What is the margin of these points?

• Margin γ of training set:

w ⋅ xi + b = +1 for positive instances (yi = +1)
w ⋅ xi + b = −1 for negative instances (yi = −1)     (1)

d+ = d− = 1 / ||w|| = 1 / sqrt(w1² + ... + wn²)

γ = 2 / ||w||
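A sketch of the reasoning behind these values (not spelled out on the slide): the distance from any point x to the hyperplane w ⋅ x + b = 0 is |w ⋅ x + b| / ||w||, and for the points where (1) holds this numerator equals 1, so

\[
d_{+} = d_{-} = \frac{|\mathbf{w}\cdot\mathbf{x}_i + b|}{\|\mathbf{w}\|} = \frac{1}{\|\mathbf{w}\|},
\qquad
\gamma = d_{+} + d_{-} = \frac{2}{\|\mathbf{w}\|}.
\]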

Page 11

[Figure: separating hyperplane with positive and negative examples; the nearest points on each side lie at distance 1/||w|| from it]

• The points xi for which (1) holds are called support

vectors.

• Goal is to maximize margin γ:

γ = 2 / ||w||

• To do this, we need to minimize ||w||², subject to constraints

w ⋅ xi + b ≥ +1 for positive instances (yi = +1)
w ⋅ xi + b ≤ −1 for negative instances (yi = −1)

Page 12

• Goal is to find the hyperplane that gives the maximum margin of 2 / ||w||, by minimizing ||w||², subject to the joint constraints given above, plus the constraint that the training error should be as small as possible.

• Note that this is a constrained optimization problem, which can be solved via a “constrained quadratic programming” method.

• Support vector machines are a way of finding a (w, b) that accomplishes this.
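As an illustration only (the slides do not prescribe a particular tool; the course demo uses SVM Light), here is a minimal sketch with scikit-learn's SVC, which solves this quadratic program internally. The toy data and the very large C (approximating the hard-margin case) are assumptions.

# Minimal sketch: fit a (nearly) hard-margin linear SVM on toy 2-D data.
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],    # positive class
              [0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])   # negative class
y = np.array([+1, +1, +1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)   # very large C ~ hard margin
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("margin =", 2.0 / np.linalg.norm(w))
print("support vectors:\n", clf.support_vectors_)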

Page 13

• It turns out that w can be expressed as a linear combination of a small subset of the training examples: those that lie exactly on the margin (minimum distance to the hyperplane).

• These training examples are called “support vectors”.

They carry all relevant information about the classification

problem.

w = Σ_i αi xi

• The result of the SVM training algorithm (which involves solving a quadratic programming problem) is the αi’s and the xi’s.

Page 14

• For a new example x, we can now classify x using the support vectors:

class(x) = sgn( Σ_i αi (xi ⋅ x) + b )

• This is the resulting SVM classifier.
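A minimal numerical sketch of this decision rule; the αi’s, support vectors, and bias below are hypothetical, standing in for the output of the training step.

# Sketch: classify a new point from support vectors, coefficients alpha_i, and bias b.
import numpy as np

support_vectors = np.array([[2.0, 2.0],    # hypothetical support vectors x_i
                            [1.0, 0.5]])
alphas = np.array([+0.4, -0.4])            # hypothetical alpha_i (sign carries the label)
b = -1.0                                   # hypothetical bias

def classify(x):
    # class(x) = sgn( sum_i alpha_i * (x_i . x) + b )
    return int(np.sign(alphas @ (support_vectors @ x) + b))

print(classify(np.array([3.0, 3.0])))   # +1 with these made-up numbers
print(classify(np.array([0.0, 0.0])))   # -1 with these made-up numbers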

Non-linearly separable training examples

• What if the training examples are not linearly separable?

• Use old trick: find a function that maps points to a higher-dimensional space (“feature space”) in which they are linearly separable, and do the classification in that higher-dimensional space.

Page 15

Need to find a function Φ that will perform such a mapping:

Φ: ℜn→ F

Then can find hyperplane in higher dimensional feature space

F, and do classification using that hyperplane in higher

dimensional space.

• Problem:

– Recall that classification of instance x is expressed in

terms of dot products of x and support vectors.

– The quadratic programming problem of finding the support vectors and coefficients also depends only on dot products between training examples, rather than on the training examples themselves.

class(x) = sgn( Σ_i αi (xi ⋅ x) + b )

Page 16

– So if each xi is replaced by Φ(xi) in these procedures,

we will have to calculate a lot of dot products,

Φ(xi)⋅ Φ(xj)

– But in general, if the feature space F is high

dimensional, Φ(xi) ⋅ Φ(xj) will be expensive to

compute.

– Also Φ(x) can be expensive to compute

• Second trick:

– Suppose that there were some magic function,

k(xi, xj) = Φ(xi) ⋅ Φ(xj)

such that k is cheap to compute even though Φ(xi) ⋅

Φ(xj) is expensive to compute.

– Then we wouldn’t need to compute the dot product

directly; we’d just need to compute k during both the

training and testing phases.

– The good news is: such k functions exist! They are

called “kernel functions”, and come from the theory of

integral operators.

Page 17

Example: polynomial kernel:

Suppose x = (x1, x2) and y = (y1, y2).

k(x, y) = (x ⋅ y)²

Let Φ(x) = (x1², √2 x1 x2, x2²). Then:

Φ(x) ⋅ Φ(y) = x1² y1² + 2 x1 x2 y1 y2 + x2² y2²
            = (x1 y1 + x2 y2)²
            = (x ⋅ y)²
            = k(x, y)
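A quick numerical check of this identity (illustrative only; the two input vectors are made up):

# Sketch: verify that phi(x) . phi(y) equals the polynomial kernel k(x, y) = (x . y)^2.
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

def k(x, y):
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
print(np.dot(phi(x), phi(y)))   # 16.0, computed in the 3-D feature space
print(k(x, y))                  # 16.0, computed without ever forming phi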

Recap of SVM algorithm

Given training set

S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where (xi, yi) ∈ ℜn × {+1, -1}

1. Choose a map Φ: ℜn→ F, which maps xi to a higher

dimensional feature space. (Solves problem that X might

not be linearly separable in original space.)

2. Choose a cheap-to-compute kernel function

k(x,z)=Φ(x) ⋅ Φ(z)

(Solves problem that in high dimensional spaces, dot

products are very expensive to compute.)

3. Map all the xi’s to feature space F by computing Φ(xi).

Page 18

4. Apply a quadratic programming procedure (using the kernel function k) to find a hyperplane (w, b), such that

w = Σ_i αi Φ(xi)

where the Φ(xi)’s are support vectors, the αi’s are coefficients, and b is the bias, and (w, b) is the hyperplane maximizing the margin of S in F.

• Now, given a new instance, x, find the classification of x

by computing

class(x) = sgn( Σ_i αi (Φ(xi) ⋅ Φ(x)) + b )
         = sgn( Σ_i αi k(xi, x) + b )
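An illustrative sketch of these steps using scikit-learn (an assumption; the course demo uses SVM Light): a degree-2 polynomial kernel SVM, matching the k(x, y) = (x ⋅ y)² example, on a toy data set that is not linearly separable in the original space.

# Sketch of the recap: train a kernel SVM and classify new instances via the kernel.
import numpy as np
from sklearn.svm import SVC

# Toy "XOR-like" data: not linearly separable in the original 2-D space.
X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([+1, +1, -1, -1])

# kernel = (gamma * x.y + coef0)^degree = (x . y)^2 with these settings
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=0.0, C=1e6)
clf.fit(X, y)

# Classification of new instances uses only kernel evaluations against support vectors.
X_new = np.array([[0.9, 1.1], [0.8, -1.2]])
print(clf.predict(X_new))      # expected: [ 1 -1 ]
print(clf.support_vectors_)    # the support vectors found by the QP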

Page 19

Example:

Applying SVMs to text classification

(Dumais et al., 1998)

• Used Reuters collection of news stories.

• Question: Is a particular news story in the category

“grain” (i.e., about grain, grain prices, etc.)?

• Training examples: Vectors of features such as appearance

or frequency of key words. (Similar to our spam-

classification task.)

• Resulting SVM: weight vector defined in terms of support

vectors and coefficients, plus threshold.

Results

Precision: Correctly classified positives / Number of positive classifications

Recall: Correctly classified positives / All actual positives

Can vary decision threshold to produce higher precision or higher recall.

Generally want high values for both. SVM gives best results over

all these algorithms.
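A small sketch of these two definitions on a hypothetical confusion matrix (the counts are made up):

# Sketch: precision and recall from hypothetical confusion-matrix counts.
tp = 80   # positives correctly classified as positive
fp = 20   # negatives incorrectly classified as positive
fn = 10   # positives incorrectly classified as negative

precision = tp / (tp + fp)   # correctly classified positives / positive classifications
recall    = tp / (tp + fn)   # correctly classified positives / all actual positives

print(f"precision = {precision:.2f}, recall = {recall:.2f}")   # 0.80, 0.89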

Page 20

Demo:

Text classification using SVM Light
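Not part of the slides, but as a rough sketch of what such a demo involves: SVM Light reads training data in a sparse “label index:value” format; the snippet below writes that format from a feature matrix and then invokes the svm_learn / svm_classify binaries, assuming they are on the PATH. Treat the exact invocation as an assumption to check against the SVM Light documentation.

# Rough sketch (assumptions noted above): prepare data for SVM Light and invoke it.
import subprocess
import numpy as np
from sklearn.datasets import dump_svmlight_file

X = np.random.rand(100, 50)                        # hypothetical document feature vectors
y = np.where(np.random.rand(100) > 0.5, 1, -1)     # hypothetical +1/-1 labels

# Writes "label index:value ..." lines; 1-based indices as SVM Light expects.
dump_svmlight_file(X, y, "train.dat", zero_based=False)

# Hypothetical command lines; check against the SVM Light docs.
subprocess.run(["svm_learn", "train.dat", "model"], check=True)
subprocess.run(["svm_classify", "train.dat", "model", "predictions"], check=True)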

Page 21

Data sets in Text Classification

• Training set

• Validation set

• Test set

• Notion of “Cross Validation”

• How would you do multi-class classification with SVMs?
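One common answer to the multi-class question, sketched below as an assumption rather than the course's prescribed approach: train one binary SVM per class (one-vs-rest) and pick the class whose classifier is most confident. Cross-validation is shown on the same (toy) data.

# Sketch: multi-class classification with binary SVMs via one-vs-rest,
# plus k-fold cross-validation. The data is a random toy stand-in.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 5))
y = rng.integers(0, 3, size=150)     # three hypothetical classes

clf = OneVsRestClassifier(SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(scores.mean())   # near chance here, since the labels are random; the point is the workflow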

Page 22

Spam Classification Data for Homework