
Linear Classification and SVM
Dr. Xin Zhang
Email: [email protected]


Page 1:

Linear Classification and SVM

Dr. Xin Zhang
Email: [email protected]

Page 2:

What is “linear” classification?

• Classification is intrinsically non-linear
  – It puts non-identical things in the same class, so a difference in the input vector sometimes causes zero change in the answer (what does this show?)

• “Linear classification” means that the part that adapts is linear
  – The adaptive part is followed by a fixed non-linearity.
  – It may also be preceded by a fixed non-linearity (e.g. nonlinear basis functions).

  y(x) = wᵀx + w₀,    Decision = f(y(x))

  where y(x) = wᵀx + w₀ is the adaptive linear function and f(·) is the fixed non-linear function.
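As a concrete illustration of this structure (not part of the original slides), here is a minimal Python sketch, assuming NumPy and arbitrary illustrative weights:

```python
import numpy as np

def linear_classify(x, w, w0):
    """Adaptive linear function y(x) = w.x + w0, followed by a fixed non-linearity (a threshold)."""
    y = w @ x + w0            # adaptive linear part
    return 1 if y > 0 else 0  # fixed non-linear decision f(y(x))

w = np.array([0.5, -1.2])     # illustrative weights
x = np.array([2.0, 0.3])      # illustrative input vector
print(linear_classify(x, w, w0=0.1))  # -> 1
```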

Page 3:

Representing the target values for classification

• If there are only two classes, we typically use a single real-valued output that has target values of 1 for the “positive” class and 0 (or sometimes -1) for the other class.
  – For probabilistic class labels the target value can then be the probability of the positive class, and the output of the model can also represent the probability the model gives to the positive class.

• If there are N classes we often use a vector of N target values containing a single 1 for the correct class and zeros elsewhere.
  – For probabilistic labels we can then use a vector of class probabilities as the target vector.
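A small sketch (assuming NumPy; the labels are made up) of the 1-of-N target encoding described above:

```python
import numpy as np

def one_hot(labels, n_classes):
    """Vector of N target values: a single 1 for the correct class, zeros elsewhere."""
    targets = np.zeros((len(labels), n_classes))
    targets[np.arange(len(labels)), labels] = 1.0
    return targets

print(one_hot([0, 2, 1], n_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```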

Page 4:

Three approaches to classification

• Use discriminant functions directly without probabilities:
  – Convert the input vector into one or more real values so that a simple operation (like thresholding) can be applied to get the class.
  – The real values should be chosen to maximize the useable information about the class label that is in the real value.

• Infer conditional class probabilities:  p(class = Cₖ | x)
  – Compute the conditional probability of each class.
  – Then make a decision that minimizes some loss function.

• Compare the probability of the input under separate, class-specific, generative models.
  – E.g. fit a multivariate Gaussian to the input vectors of each class and see which Gaussian makes a test data vector most probable. (Is this the best bet?)
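For the third (generative) approach, a minimal sketch assuming NumPy/SciPy and synthetic Gaussian data:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))  # training vectors for class 0
X1 = rng.normal([3.0, 3.0], 1.0, size=(100, 2))  # training vectors for class 1

# Fit a multivariate Gaussian to the input vectors of each class
g0 = multivariate_normal(X0.mean(axis=0), np.cov(X0, rowvar=False))
g1 = multivariate_normal(X1.mean(axis=0), np.cov(X1, rowvar=False))

# Classify a test vector by whichever Gaussian makes it most probable
x_test = np.array([2.5, 2.0])
print(0 if g0.pdf(x_test) > g1.pdf(x_test) else 1)  # -> 1
```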

Page 5:

The planar decision surface in data-space for the simple linear discriminant function:

  wᵀx + w₀ = 0

Page 6:

Discriminant functions for N>2 classes

• One possibility is to use N two-way discriminant functions.
  – Each function discriminates one class from the rest.

• Another possibility is to use N(N-1)/2 two-way discriminant functions.
  – Each function discriminates between two particular classes.

• Both these methods have problems.

Page 7:

Problems with multi-class discriminant functions

More than one good answer

Two-way preferences need not be transitive!

Page 8:

A simple solution

• Use N discriminant functions y_i, y_j, y_k, …, and pick the max.
  – This is guaranteed to give consistent and convex decision regions if y is linear:

    y_k(x_A) > y_j(x_A)  and  y_k(x_B) > y_j(x_B)

    implies (for positive α) that

    y_k(αx_A + (1−α)x_B) > y_j(αx_A + (1−α)x_B)
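A toy sketch (NumPy, with arbitrary weights) of the “pick the max of N linear discriminants” rule:

```python
import numpy as np

# One weight vector and bias per class (arbitrary illustrative values)
W = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.1, 0.2])

def classify(x):
    scores = W @ x + b             # y_k(x) for every class k
    return int(np.argmax(scores))  # pick the max

print(classify(np.array([2.0, 0.5])))  # -> 0
```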

Page 9:

Using “least squares” for classification

• This is not the right thing to do and it doesn’t work as well as better methods, but it is easy:
  – It reduces classification to least squares regression.
  – We already know how to do regression. We can just solve for the optimal weights with some matrix algebra.

• We use targets that are equal to the conditional probability of the class given the input.
  – When there are more than two classes, we treat each class as a separate problem (we cannot get away with this if we use the “max” decision function).
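One way the least-squares approach could look in code (a sketch assuming NumPy and a tiny made-up dataset):

```python
import numpy as np

# Toy data: rows are input vectors, labels in {0, 1, 2}
X = np.array([[0.0, 0.0], [1.0, 0.1], [0.2, 1.0], [1.0, 1.0]])
labels = np.array([0, 1, 2, 1])

T = np.eye(3)[labels]                      # one-hot targets
Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column

# Solve for the optimal weights with some matrix algebra (least squares)
W, *_ = np.linalg.lstsq(Xb, T, rcond=None)

print((Xb @ W).argmax(axis=1))  # classify training points with the "max" decision rule
```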

Page 10:

Problems with using least squares for classification

If the right answer is 1 and the model says 1.5, it loses, so it changes the boundary to avoid being “too correct”

(Figure: the decision boundaries found by logistic regression and by least squares regression on the same data.)

Page 11:

Another example where least squares regression gives poor decision surfaces

Page 12:

Fisher’s linear discriminant

• A simple linear discriminant function is a projection of the data down to 1-D.
  – So choose the projection that gives the best separation of the classes. What do we mean by “best separation”?

• An obvious direction to choose is the direction of the line joining the class means.
  – But if the main direction of variance in each class is not orthogonal to this line, this will not give good separation (see the next figure).

• Fisher’s method chooses the direction that maximizes the ratio of between-class variance to within-class variance.
  – This is the direction in which the projected points contain the most information about class membership (under Gaussian assumptions).

Page 13:

A picture showing the advantage of Fisher’s linear discriminant.

When projected onto the line joining the class means, the classes are not well separated.

Fisher chooses a direction that makes the projected classes much tighter, even though their projected means are less far apart.

Page 14:

Math of Fisher’s linear discriminants

• What linear transformation is best for discrimination?

  y = wᵀx

• The projection onto the vector separating the class means seems sensible:

  w ∝ m₂ − m₁

• But we also want small variance within each class (mₖ here is the mean of the projected data from class Cₖ):

  s₁² = Σₙ∈C₁ (yₙ − m₁)²,    s₂² = Σₙ∈C₂ (yₙ − m₂)²

• Fisher’s objective function is:

  J(w) = (m₂ − m₁)² / (s₁² + s₂²)    (between-class separation over within-class variance)

Page 15:

More math of Fisher’s linear discriminants

  J(w) = (m₂ − m₁)² / (s₁² + s₂²) = (wᵀ S_B w) / (wᵀ S_W w)

  S_B = (m₂ − m₁)(m₂ − m₁)ᵀ

  S_W = Σₙ∈C₁ (xₙ − m₁)(xₙ − m₁)ᵀ + Σₙ∈C₂ (xₙ − m₂)(xₙ − m₂)ᵀ

  Optimal solution:  w ∝ S_W⁻¹ (m₂ − m₁)
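A short sketch of the closed-form Fisher direction (assuming NumPy and synthetic two-class data):

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher's linear discriminant direction: w proportional to S_W^{-1} (m2 - m1)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter matrix S_W
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(S_W, m2 - m1)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], [1.0, 0.2], size=(100, 2))
X2 = rng.normal([2.0, 1.0], [1.0, 0.2], size=(100, 2))
print(fisher_direction(X1, X2))
```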

Page 16:

Perceptrons

• “Perceptrons” describes a whole family of learning machines, but the standard type consisted of a layer of fixed non-linear basis functions followed by a simple linear discriminant function.
  – They were introduced in the late 1950’s and they had a simple online learning procedure.
  – Grand claims were made about their abilities. This led to lots of controversy.
  – Researchers in symbolic AI emphasized their limitations (as part of an ideological campaign against real numbers, probabilities, and learning).

• Support Vector Machines are just perceptrons with a clever way of choosing the non-adaptive, non-linear basis functions and a better learning procedure.
  – They have all the same limitations as perceptrons in what types of function they can learn.
  – But people seem to have forgotten this.

Page 17:

Linear Classifiers

f(x, w, b) = sign(w·x + b)

How would you classify this data?

(Figure: two classes of points, one marker denoting +1 and the other denoting −1, with a candidate separating line w·x + b = 0; points on one side have w·x + b > 0 and on the other w·x + b < 0.)

Page 18:

Linear Classifiers

f(x, w, b) = sign(w·x + b)

How would you classify this data?

(Figure: the same two-class data with another candidate separating line.)

Page 19:

Linear Classifiers

f(x, w, b) = sign(w·x + b)

How would you classify this data?

(Figure: the same two-class data with yet another candidate separating line.)

Page 20:

Linear Classifiers

f(x, w, b) = sign(w·x + b)

Any of these would be fine... but which is best?

Page 21:

Linear Classifiers

f(x, w, b) = sign(w·x + b)

How would you classify this data?

(Figure: a boundary that leaves some points misclassified to the +1 class.)

Page 22:

Classifier Margin

f(x, w, b) = sign(w·x + b)

Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.

Page 23:

Maximum Margin

f(x, w, b) = sign(w·x + b)

The maximum margin linear classifier is the linear classifier with the, um, maximum margin.

This is the simplest kind of SVM, called a Linear SVM (LSVM).

Support vectors are those datapoints that the margin pushes up against.

1. Maximizing the margin is good according to intuition and PAC theory.
2. It implies that only the support vectors are important; other training examples are ignorable.
3. Empirically it works very, very well.

Page 24:

Linear SVM Mathematically

What we know:
• w·x⁺ + b = +1
• w·x⁻ + b = −1
• w·(x⁺ − x⁻) = 2

The plane w·x + b = +1 borders the “Predict Class = +1” zone and the plane w·x + b = −1 borders the “Predict Class = −1” zone, with the decision boundary w·x + b = 0 in between.

Margin width:  M = (x⁺ − x⁻)·(w/‖w‖) = 2/‖w‖

Page 25:

Linear SVM Mathematically

Goal:
1) Correctly classify all training data:
     w·xᵢ + b ≥ +1  if yᵢ = +1
     w·xᵢ + b ≤ −1  if yᵢ = −1
   i.e.  yᵢ(w·xᵢ + b) ≥ 1  for all i
2) Maximize the margin M = 2/‖w‖, which is the same as minimizing ½ wᵀw.

We can formulate a Quadratic Optimization Problem and solve for w and b:

  Minimize  ½ wᵀw
  subject to  yᵢ(w·xᵢ + b) ≥ 1  for all i
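A hard-margin-style fit using scikit-learn’s SVC (a sketch; the toy data and the large C used to approximate the hard margin are my own choices, not from the slides):

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable data with labels in {-1, +1}
X = np.array([[0.0, 0.0], [0.5, 0.2], [2.0, 2.0], [2.5, 1.8]])
y = np.array([-1, -1, +1, +1])

# A very large C approximates: minimize 1/2 w.w  s.t.  y_i (w.x_i + b) >= 1
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]
print("margin width M = 2/||w|| =", 2 / np.linalg.norm(w))
```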

Page 26:

Solving the Optimization Problem

Need to optimize a quadratic function subject to linear constraints.

Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them.

The solution involves constructing a dual problem where a Lagrange multiplier αᵢ is associated with every constraint in the primal problem:

Primal:  Find w and b such that
  Φ(w) = ½ wᵀw is minimized, and for all {(xᵢ, yᵢ)}:  yᵢ(wᵀxᵢ + b) ≥ 1

Dual:  Find α₁…αN such that
  Q(α) = Σαᵢ − ½ ΣΣ αᵢαⱼ yᵢyⱼ xᵢᵀxⱼ is maximized and
  (1) Σαᵢyᵢ = 0
  (2) αᵢ ≥ 0 for all αᵢ

Page 27:

The Optimization Problem Solution

The solution has the form:

  w = Σ αᵢyᵢxᵢ
  b = yₖ − wᵀxₖ   for any xₖ such that αₖ ≠ 0

Each non-zero αᵢ indicates that the corresponding xᵢ is a support vector.

Then the classifying function has the form:

  f(x) = Σ αᵢyᵢxᵢᵀx + b

Notice that it relies on an inner product between the test point x and the support vectors xᵢ – we will return to this later.

Also keep in mind that solving the optimization problem involved computing the inner products xᵢᵀxⱼ between all pairs of training points.
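A small check (a sketch, scikit-learn assumed) that the fitted classifier really has the form f(x) = Σ αᵢyᵢxᵢᵀx + b, built only from the support vectors:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.5, 0.2], [2.0, 2.0], [2.5, 1.8]])
y = np.array([-1, -1, +1, +1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# dual_coef_ holds alpha_i * y_i for the support vectors only
x_test = np.array([1.0, 1.5])
f = clf.dual_coef_[0] @ (clf.support_vectors_ @ x_test) + clf.intercept_[0]

print(f, clf.decision_function([x_test])[0])  # the two numbers agree
```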

Page 28:

Dataset with noise

Hard Margin: so far we require all data points to be classified correctly
  – No training error

What if the training set is noisy?
  – Solution 1: use very powerful kernels

(Figure: two noisy classes, one denoting +1 and the other −1, separated by an extremely wiggly boundary – OVERFITTING!)

Page 29:

Soft Margin Classification

Slack variables ξᵢ can be added to allow misclassification of difficult or noisy examples.

(Figure: the planes w·x + b = 1, w·x + b = 0, and w·x + b = −1, with a few points inside or beyond the margin annotated with their slack values ξᵢ.)

What should our quadratic optimization criterion be?

  Minimize   ½ w·w + C Σₖ ξₖ   (k = 1…R)

Page 30:

Hard Margin vs. Soft Margin

The old formulation:

  Find w and b such that
  Φ(w) = ½ wᵀw is minimized, and for all {(xᵢ, yᵢ)}:  yᵢ(wᵀxᵢ + b) ≥ 1

The new formulation incorporating slack variables:

  Find w and b such that
  Φ(w) = ½ wᵀw + C Σξᵢ is minimized, and for all {(xᵢ, yᵢ)}:  yᵢ(wᵀxᵢ + b) ≥ 1 − ξᵢ and ξᵢ ≥ 0 for all i

Parameter C can be viewed as a way to control overfitting.
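A quick sketch (scikit-learn, synthetic overlapping classes of my own making) of how C trades margin size against slack:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(2.0, 1.0, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)  # overlapping classes, so some slack is unavoidable

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Small C tolerates more margin violations; large C penalizes them heavily
    print(C, "support vectors:", len(clf.support_vectors_))
```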

Page 31:

Linear SVMs: Overview

• The classifier is a separating hyperplane.
• The most “important” training points are support vectors; they define the hyperplane.
• Quadratic optimization algorithms can identify which training points xᵢ are support vectors, i.e. those with non-zero Lagrange multipliers αᵢ.
• Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:

  Find α₁…αN such that
  Q(α) = Σαᵢ − ½ ΣΣ αᵢαⱼ yᵢyⱼ xᵢᵀxⱼ is maximized and
  (1) Σαᵢyᵢ = 0
  (2) 0 ≤ αᵢ ≤ C for all αᵢ

  f(x) = Σ αᵢyᵢxᵢᵀx + b

Page 32:

Non-linear SVMs

Datasets that are linearly separable with some noise work out great.

But what are we going to do if the dataset is just too hard?

How about… mapping the data to a higher-dimensional space?

(Figure: 1-D data on an x axis that no threshold can separate; after mapping each point x to (x, x²), the two classes can be separated by a line.)
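A tiny sketch of the x → (x, x²) mapping suggested by the figure (NumPy, made-up 1-D data):

```python
import numpy as np

# 1-D data that is not linearly separable: positives in the middle, negatives outside
x = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])
y = np.array([-1, -1, +1, +1, +1, -1, -1])

# After mapping each point to (x, x^2), a threshold on the x^2 coordinate separates the classes
phi = np.column_stack([x, x ** 2])
print(phi)
```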

Page 33:

Non-linear SVMs: Feature spaces

General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)

Page 34:

The “Kernel Trick”

The linear classifier relies on a dot product between vectors: K(xi, xj) = xiᵀxj.

If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes:

  K(xi, xj) = φ(xi)ᵀφ(xj)

A kernel function is some function that corresponds to an inner product in some expanded feature space.

Example: 2-dimensional vectors x = [x1 x2]; let K(xi, xj) = (1 + xiᵀxj)².

Need to show that K(xi, xj) = φ(xi)ᵀφ(xj):

  K(xi, xj) = (1 + xiᵀxj)²
            = 1 + xi1²xj1² + 2 xi1xj1xi2xj2 + xi2²xj2² + 2 xi1xj1 + 2 xi2xj2
            = [1  xi1²  √2 xi1xi2  xi2²  √2 xi1  √2 xi2]ᵀ [1  xj1²  √2 xj1xj2  xj2²  √2 xj1  √2 xj2]
            = φ(xi)ᵀφ(xj),   where φ(x) = [1  x1²  √2 x1x2  x2²  √2 x1  √2 x2]
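A numerical check of this identity (a sketch, NumPy, arbitrary 2-D vectors):

```python
import numpy as np

def phi(x):
    """Explicit feature map for K(a, b) = (1 + a.b)^2 on 2-D inputs."""
    x1, x2 = x
    return np.array([1.0, x1**2, np.sqrt(2)*x1*x2, x2**2,
                     np.sqrt(2)*x1, np.sqrt(2)*x2])

a = np.array([0.7, -1.3])
b = np.array([2.0, 0.4])

print((1 + a @ b) ** 2)   # kernel computed in the input space
print(phi(a) @ phi(b))    # same value via the explicit feature map
```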

Page 35:

What Functions are Kernels?

For some functions K(xi, xj), checking that K(xi, xj) = φ(xi)ᵀφ(xj) can be cumbersome.

Mercer’s theorem: every positive semi-definite symmetric function is a kernel.

Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:

      | K(x1,x1)  K(x1,x2)  K(x1,x3)  …  K(x1,xN) |
      | K(x2,x1)  K(x2,x2)  K(x2,x3)  …  K(x2,xN) |
  K = |    …         …         …      …     …     |
      | K(xN,x1)  K(xN,x2)  K(xN,x3)  …  K(xN,xN) |
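A small sketch (NumPy, random data) of checking the Gram-matrix condition for the quadratic kernel used earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# Gram matrix for the kernel K(a, b) = (1 + a.b)^2
K = (1 + X @ X.T) ** 2

# A valid kernel gives a symmetric positive semi-definite Gram matrix
print(np.allclose(K, K.T))
print(np.linalg.eigvalsh(K).min() >= -1e-9)  # all eigenvalues (numerically) non-negative
```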

Page 36:

Examples of Kernel Functions

Linear:  K(xi, xj) = xiᵀxj

Polynomial of power p:  K(xi, xj) = (1 + xiᵀxj)ᵖ

Gaussian (radial-basis function network):  K(xi, xj) = exp(−‖xi − xj‖² / (2σ²))

Sigmoid:  K(xi, xj) = tanh(β₀ xiᵀxj + β₁)
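For concreteness, the Gaussian kernel as a one-liner (a sketch, NumPy assumed):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """K(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

print(gaussian_kernel(np.array([0.0, 0.0]), np.array([1.0, 1.0])))
```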

Page 37:

Non-linear SVMs Mathematically

Dual problem formulation:

  Find α₁…αN such that
  Q(α) = Σαᵢ − ½ ΣΣ αᵢαⱼ yᵢyⱼ K(xᵢ, xⱼ) is maximized and
  (1) Σαᵢyᵢ = 0
  (2) αᵢ ≥ 0 for all αᵢ

The solution is:

  f(x) = Σ αᵢyᵢK(xᵢ, x) + b

Optimization techniques for finding the αᵢ’s remain the same!
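A kernelized fit on data that no linear classifier can separate (a sketch; scikit-learn’s SVC with an RBF kernel, and the XOR-style data and parameters are my own choices):

```python
import numpy as np
from sklearn.svm import SVC

# XOR-like data: not linearly separable in the input space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, +1, +1, -1])

clf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, y)
print(clf.predict(X))  # the kernelized SVM recovers the XOR labelling
```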

Page 38:

Nonlinear SVM – Overview

• The SVM locates a separating hyperplane in the feature space and classifies points in that space.
• It does not need to represent the feature space explicitly; it only needs a kernel function.
• The kernel function plays the role of the dot product in the feature space.

Page 39:

Properties of SVM

• Flexibility in choosing a similarity function
• Sparseness of solution when dealing with large data sets – only support vectors are used to specify the separating hyperplane
• Ability to handle large feature spaces – complexity does not depend on the dimensionality of the feature space
• Overfitting can be controlled by the soft margin approach
• Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
• Feature selection

Page 40:

SVM Applications

• SVM has been used successfully in many real-world problems:
  – text (and hypertext) categorization
  – image classification
  – bioinformatics (protein classification, cancer classification)
  – hand-written character recognition

Page 41:

Application 1: Cancer Classification

• High dimensional – p > 1000, n < 100
• Imbalanced – fewer positive samples
• Many irrelevant features
• Noisy

(Figure: an n × p data matrix of patients p-1 … p-n by genes g-1, g-2, …, g-p; the kernel matrix K = [k(x, x)] is N × N.)

FEATURE SELECTION: in the linear case, wᵢ² gives the ranking of dimension i.

SVM is sensitive to noisy (mis-labeled) data.

Page 42:

Weakness of SVM

• It is sensitive to noise
  – A relatively small number of mislabeled examples can dramatically decrease the performance.

• It only considers two classes
  – How to do multi-class classification with SVM?
  – Answer:
    1) With output arity m, learn m SVMs (see the sketch after this list):
       – SVM 1 learns “Output == 1” vs “Output != 1”
       – SVM 2 learns “Output == 2” vs “Output != 2”
       – :
       – SVM m learns “Output == m” vs “Output != m”
    2) To predict the output for a new input, just predict with each SVM and find out which one puts the prediction the furthest into the positive region.
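A sketch of the one-per-class scheme described above (scikit-learn assumed; the three-cluster toy data is made up):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ([0, 0], [2, 0], [1, 2])])
y = np.repeat([0, 1, 2], 30)

# Learn m SVMs: SVM k learns "Output == k" vs "Output != k"
svms = [SVC(kernel="linear").fit(X, np.where(y == k, +1, -1)) for k in range(3)]

def predict(x):
    # Pick the SVM that puts the prediction furthest into the positive region
    scores = [clf.decision_function([x])[0] for clf in svms]
    return int(np.argmax(scores))

print(predict(np.array([1.0, 2.0])))  # -> 2
```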

Page 43:

Application 2: Text Categorization

• Task: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content.
  – e.g. email filtering, web searching, sorting documents by topic, etc.

• A document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category.

Page 44:

Representation of Text

IR’s vector space model (aka bag-of-words representation):
• A document is represented by a vector indexed by a pre-fixed set or dictionary of terms.
• Values of an entry can be binary or weights.
• Normalization, stop words, word stems.
• Doc x => φ(x)
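A minimal bag-of-words sketch (scikit-learn’s CountVectorizer assumed; the two documents are made up):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog chased the cat"]

# Each document becomes a vector indexed by a fixed dictionary of terms
vectorizer = CountVectorizer(stop_words="english")
phi = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())
print(phi.toarray())
```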

Page 45:

Text Categorization using SVM

• The distance between two documents is φ(x)·φ(z).
• K(x, z) = φ(x)·φ(z) is a valid kernel, so an SVM can be used with K(x, z) for discrimination.

• Why SVM?
  – High dimensional input space
  – Few irrelevant features (dense concept)
  – Sparse document vectors (sparse instances)
  – Text categorization problems are linearly separable

Page 46:

Some Issues

• Choice of kernel
  – Gaussian or polynomial kernel is the default.
  – If ineffective, more elaborate kernels are needed.
  – Domain experts can give assistance in formulating appropriate similarity measures.

• Choice of kernel parameters
  – e.g. σ in the Gaussian kernel.
  – σ is the distance between the closest points with different classifications.
  – In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters.

• Optimization criterion – hard margin vs. soft margin
  – A lengthy series of experiments in which various parameters are tested.

Page 47:

Additional Resources

• An excellent tutorial on VC-dimension and Support Vector Machines:
  C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.

• The VC/SRM/SVM Bible:
  Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience, 1998.

http://www.kernel-machines.org/

Page 48:

Reference

• Support Vector Machine Classification of Microarray Gene Expression Data. Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Sugnet, Manuel Ares, Jr., David Haussler.
• www.cs.utexas.edu/users/mooney/cs391L/svm.ppt
• Text Categorization with Support Vector Machines: Learning with Many Relevant Features. T. Joachims, ECML-98.