
Overview of Kernel Methods

Prof. Bennett

Math Model of Learning and Discovery 2/27/05

Based on Chapter 2 of

Shawe-Taylor and Cristianini

Outline

Overview

Ridge Regression

Kernel Ridge Regression

Other Kernels

Summary

Kernel Method

Two parts (+):
1. Mapping into an embedding or feature space defined by the kernel.
2. A learning algorithm for discovering linear patterns in that space.

(The learning algorithm must be regularized.)
(The learning algorithm must work in dual space.)

Illustrate using linear regression.

Linear Regression

Given training data (points and labels):

$S = \{(x_1, y_1), (x_2, y_2), \ldots, (x_\ell, y_\ell)\}, \quad x_i \in \mathbb{R}^n, \; y_i \in \mathbb{R}$

Construct linear function:

$g(x) = \langle w, x \rangle = w'x = \sum_{i=1}^{n} w_i x_i$

Creates pattern function:

$f(x, y) = |y - g(x)| = |y - \langle w, x \rangle| \approx 0$

1-d Regression

(Figure: 1-d regression, fitting the line $y = \langle w, x \rangle$ to the data.)

Predict Drug Absorption

Human intestinal cell line CACO-2

Predict permeability

718 descriptors generated:
- Electronic (TAE)
- Shape/Property (PEST)
- Traditional

27 molecules with tested permeability

$y \in \mathbb{R}, \quad x_i \in \mathbb{R}^{718}$


Least Squares Approximation

Want:

$g(x_i) \approx y_i$

Define error:

$f(x, y) = |y - g(x)|$

Minimize loss:

$L(g, S) = L(w, S) = \sum_{i=1}^{\ell} (y_i - g(x_i))^2 = \sum_{i=1}^{\ell} \mathcal{L}(y_i, g(x_i))$

Optimal Solution

Want:

$\xi = y - Xw \approx 0$

Mathematical Model:

$\min_w L(w, S) = \|\xi\|^2 = (y - Xw)'(y - Xw)$

Optimality Condition:

$\frac{\partial L(w, S)}{\partial w} = -2X'y + 2X'Xw = 0$

Solution satisfies:

$X'Xw = X'y$

Solving this $n \times n$ system of equations is $O(n^3)$.
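A minimal NumPy sketch of this primal solve is below; the data shapes and noise level are invented for illustration. We solve the $n \times n$ system directly rather than forming an explicit inverse.

```python
import numpy as np

# Sketch: primal least squares via the normal equations X'Xw = X'y.
# Data shapes and noise level are made up for the example.
rng = np.random.default_rng(0)
ell, n = 100, 5
X = rng.normal(size=(ell, n))                 # one example per row
w_true = rng.normal(size=n)
y = X @ w_true + 0.1 * rng.normal(size=ell)   # noisy linear targets

# Solve the n x n system; the factorization is the O(n^3) step.
w = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(w - w_true, 2))                # small residual errors
```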

Solution

Assume $(X'X)^{-1}$ exists; then

$X'Xw = X'y \;\Rightarrow\; w = (X'X)^{-1}X'y$

Alternative Representation:

$w = (X'X)^{-1}X'y = X'X(X'X)^{-2}X'y = X'\left[X(X'X)^{-2}X'y\right] = X'\alpha$

where $\alpha = X(X'X)^{-2}X'y$, i.e. $w = \sum_{i=1}^{\ell} \alpha_i x_i$

Is this a good assumption?
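As a quick numerical check (a sketch on toy data, not part of the original slides), the alternative representation can be verified against the direct solution when $X'X$ is invertible; when $n > \ell$, $X'X$ is rank-deficient, which motivates the next slide.

```python
import numpy as np

# Sketch: verify w = X'a with a = X (X'X)^{-2} X'y on toy data
# where ell >> n, so X'X is (almost surely) invertible.
rng = np.random.default_rng(1)
ell, n = 50, 4
X, y = rng.normal(size=(ell, n)), rng.normal(size=ell)

XtX_inv = np.linalg.inv(X.T @ X)
w_direct = XtX_inv @ X.T @ y
alpha = X @ XtX_inv @ XtX_inv @ X.T @ y   # a = X (X'X)^{-2} X'y
print(np.allclose(w_direct, X.T @ alpha)) # True: w is a combination of points

# With n > ell (more features than examples), X'X is singular and
# np.linalg.inv would fail -- the motivation for ridge regression.
```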

Ridge Regression

Inverse typically does not exist.

Use the least norm solution for fixed $\lambda > 0$.

Regularized problem:

$\min_w L(w, S) = \lambda \|w\|^2 + \|y - Xw\|^2$

Optimality Condition:

$\frac{\partial L(w, S)}{\partial w} = 2\lambda w - 2X'y + 2X'Xw = 0$

Solution satisfies:

$(X'X + \lambda I_n)w = X'y$

Requires $O(n^3)$ operations.
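A sketch of the primal ridge solve, using the drug-absorption shapes ($\ell = 27$, $n = 718$) with random stand-in data:

```python
import numpy as np

def primal_ridge(X, y, lam):
    """Solve (X'X + lam*I_n) w = X'y; the n x n solve is the O(n^3) step."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(27, 718))    # few examples, many descriptors
y = rng.normal(size=27)
w = primal_ridge(X, y, lam=1.0)   # solvable even though X'X alone is singular
```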

Ridge Regression (cont)

Inverse always exists for any $\lambda > 0$:

$w = (X'X + \lambda I)^{-1}X'y$

Alternative representation:

$(X'X + \lambda I)w = X'y \;\Rightarrow\; \lambda w = X'y - X'Xw = X'(y - Xw)$

$w = X'\alpha$ where $\alpha = \lambda^{-1}(y - Xw)$

$\lambda\alpha = y - Xw = y - XX'\alpha \;\Rightarrow\; (XX' + \lambda I)\alpha = y$

$\alpha = (G + \lambda I)^{-1}y$ where $G = XX'$

Solving this $\ell \times \ell$ system of equations is $O(\ell^3)$.
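The corresponding dual solve, again as a sketch on stand-in data:

```python
import numpy as np

def dual_ridge(X, y, lam):
    """Solve (G + lam*I) alpha = y, G = XX'; the ell x ell solve is O(ell^3)."""
    G = X @ X.T                                        # Gram matrix
    return np.linalg.solve(G + lam * np.eye(len(y)), y)

rng = np.random.default_rng(3)
X = rng.normal(size=(27, 718))
y = rng.normal(size=27)
alpha = dual_ridge(X, y, lam=1.0)
w = X.T @ alpha        # the primal weights, recovered as w = X'alpha
```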

Dual Ridge Regression

To predict a new point $x$:

$g(x) = \langle w, x \rangle = \sum_{i=1}^{\ell} \alpha_i \langle x_i, x \rangle = y'(G + \lambda I)^{-1}z$

where $z_i = \langle x_i, x \rangle$.

Note we need only compute $G$, the Gram matrix:

$G = XX', \quad G_{ij} = \langle x_i, x_j \rangle$

Ridge Regression requires only inner products between data points
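A small sketch (toy data, arbitrary $\lambda$) confirming that the dual prediction $\alpha'z$ matches the primal prediction $\langle w, x \rangle$:

```python
import numpy as np

rng = np.random.default_rng(4)
X, y, lam = rng.normal(size=(30, 10)), rng.normal(size=30), 0.5

alpha = np.linalg.solve(X @ X.T + lam * np.eye(len(y)), y)          # dual
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)    # primal

x_new = rng.normal(size=10)
z = X @ x_new                              # z_i = <x_i, x_new>
print(np.isclose(alpha @ z, w @ x_new))    # True: dual == primal
```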

Efficiency

To compute $w$ in primal ridge regression is $O(n^3)$.

To compute $\alpha$ in dual ridge regression is $O(\ell^3)$.

To predict a new point $x$:

primal $O(n)$: $g(x) = \langle w, x \rangle = \sum_{i=1}^{n} w_i x_i$

dual $O(n\ell)$: $g(x) = \sum_{i=1}^{\ell} \alpha_i \langle x_i, x \rangle = \sum_{i=1}^{\ell} \alpha_i \sum_{j=1}^{n} x_{ij} x_j$

Dual is better if $n \gg \ell$. (A rough timing sketch follows.)
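The sketch below (not a careful benchmark; sizes are invented) illustrates the gap when $n \gg \ell$:

```python
import time
import numpy as np

rng = np.random.default_rng(5)
ell, n, lam = 50, 2000, 1.0
X, y = rng.normal(size=(ell, n)), rng.normal(size=ell)

t0 = time.perf_counter()
w = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)    # n x n primal solve
t1 = time.perf_counter()
alpha = np.linalg.solve(X @ X.T + lam * np.eye(ell), y)    # ell x ell dual solve
t2 = time.perf_counter()
print(f"primal: {t1 - t0:.4f}s   dual: {t2 - t1:.6f}s")    # dual is far cheaper
```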

Notes on Ridge Regression

"Regularization" is key to addressing both stability and generalization.

- Regularization lets the method work when $n \gg \ell$.
- The dual is more efficient when $n \gg \ell$.
- The dual only requires inner products of the data.

Linear Regression in Feature Space

Key Idea:

Map data to a higher-dimensional space (the feature space) and perform linear regression in the embedded space.

Embedding map:

$\phi: \mathbb{R}^n \to F \subseteq \mathbb{R}^N, \quad x \mapsto \phi(x), \quad N \gg n$

Nonlinear Regression in Feature Space

Let $x = (a, b)$ and embed it as

$\phi(x) = (a^2, b^2, \sqrt{2}\,ab) \in F$

In primal representation:

$g(x) = \langle \phi(x), w \rangle_F = w_1 a^2 + w_2 b^2 + \sqrt{2}\,w_3\,ab$
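A sketch of this explicit embedding (the weights are chosen arbitrarily for illustration):

```python
import numpy as np

def phi(x):
    """Explicit embedding (a, b) -> (a^2, b^2, sqrt(2)*a*b)."""
    a, b = x
    return np.array([a**2, b**2, np.sqrt(2) * a * b])

w = np.array([1.0, -2.0, 0.5])   # arbitrary weights for illustration
x = np.array([3.0, 4.0])
g = w @ phi(x)                   # = w1*a^2 + w2*b^2 + sqrt(2)*w3*a*b
print(g)
```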

Nonlinear Regression in Feature Space

In dual representation:

$g(x) = \langle \phi(x), w \rangle_F = \sum_{i=1}^{\ell} \alpha_i \langle \phi(x_i), \phi(x) \rangle = \sum_{i=1}^{\ell} \alpha_i K(x_i, x)$

Kernel Function

A kernel is a function $K$ such that

$K(x, u) = \langle \phi(x), \phi(u) \rangle_F$

where $\phi$ is a mapping from the input space to the feature space $F$.

There are many possible kernels. The simplest is the linear kernel:

$K(x, u) = \langle x, u \rangle$

Derivation of Kernel

$\langle \phi(u), \phi(v) \rangle = \langle (u_1^2, u_2^2, \sqrt{2}\,u_1 u_2), (v_1^2, v_2^2, \sqrt{2}\,v_1 v_2) \rangle$

$= u_1^2 v_1^2 + u_2^2 v_2^2 + 2\,u_1 u_2 v_1 v_2$

$= (u_1 v_1 + u_2 v_2)^2$

$= \langle u, v \rangle^2$

Thus: $K(u, v) = \langle u, v \rangle^2$
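This identity is easy to verify numerically (a sketch with random inputs):

```python
import numpy as np

def phi(x):
    """The degree-2 feature map from the derivation above."""
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

rng = np.random.default_rng(6)
u, v = rng.normal(size=2), rng.normal(size=2)

lhs = phi(u) @ phi(v)         # inner product computed in feature space
rhs = (u @ v) ** 2            # kernel computed in input space
print(np.isclose(lhs, rhs))   # True: <phi(u), phi(v)> = <u, v>^2
```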

Ridge Regression in Feature Space

To predict a new point $x$:

$g(\phi(x)) = \langle w, \phi(x) \rangle = \sum_{i=1}^{\ell} \alpha_i \langle \phi(x_i), \phi(x) \rangle = y'(G + \lambda I)^{-1}z$

where $z_i = \langle \phi(x_i), \phi(x) \rangle$.

To compute the Gram matrix:

$G = \phi(X)\phi(X)', \quad G_{ij} = \langle \phi(x_i), \phi(x_j) \rangle = K(x_i, x_j)$

Use the kernel to compute the inner products.
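Putting the pieces together, a minimal kernel ridge regression sketch using the degree-2 polynomial kernel derived above (the data and $\lambda$ are invented):

```python
import numpy as np

def kernel_ridge_fit(X, y, kernel, lam):
    """alpha = (G + lam*I)^{-1} y, where G_ij = K(x_i, x_j)."""
    G = kernel(X, X)
    return np.linalg.solve(G + lam * np.eye(len(y)), y)

def kernel_ridge_predict(X_train, alpha, kernel, X_new):
    """g(x) = sum_i alpha_i K(x_i, x) for each row of X_new."""
    return kernel(X_new, X_train) @ alpha

poly2 = lambda A, B: (A @ B.T) ** 2   # K(u, v) = <u, v>^2

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 2))
y = X[:, 0]**2 - X[:, 1]**2 + 0.05 * rng.normal(size=40)   # quadratic target

alpha = kernel_ridge_fit(X, y, poly2, lam=0.1)
print(kernel_ridge_predict(X, alpha, poly2, X[:3]) - y[:3])  # small errors
```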

Popular Kernels based on vectors

By Hilbert-Schmidt kernels (Courant and Hilbert 1953):

$\langle \phi(u), \phi(v) \rangle = K(u, v)$ for certain $\phi$ and $K$, e.g.

Degree $d$ polynomial: $K(u, v) = (\langle u, v \rangle + 1)^d$

Radial Basis Function machine: $K(u, v) = \exp\!\left(-\frac{\|u - v\|^2}{2\sigma^2}\right)$

Two-layer neural network: $K(u, v) = \mathrm{sigmoid}(\kappa \langle u, v \rangle + c)$
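Sketches of these three kernels; the parameter names ($d$, $\sigma$, $\kappa$, $c$) follow the formulas above and the default values are our own:

```python
import numpy as np

def poly_kernel(u, v, d=3):
    """Degree-d polynomial kernel (<u, v> + 1)^d."""
    return (u @ v + 1) ** d

def rbf_kernel(u, v, sigma=1.0):
    """RBF kernel exp(-||u - v||^2 / (2*sigma^2))."""
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2 * sigma ** 2))

def nn_kernel(u, v, kappa=1.0, c=0.0):
    """Two-layer NN kernel sigmoid(kappa*<u, v> + c); logistic sigmoid here."""
    return 1.0 / (1.0 + np.exp(-(kappa * (u @ v) + c)))
```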

Kernels Intuition

Kernels encode the notion of similarity to be used for a specific application:
- Documents can use the cosine of "bag of words" vectors.
- Gene sequences can use edit distance.

Similarity defines a distance:

$\|u - v\|^2 = (u - v)'(u - v) = \langle u, u \rangle - 2\langle u, v \rangle + \langle v, v \rangle$

The trick is to get the right encoding for the domain. (A small sketch of this kernel-induced distance follows.)
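A small sketch of the kernel-induced squared distance (the RBF kernel and points are arbitrary examples):

```python
import numpy as np

def kernel_distance_sq(K, u, v):
    """||phi(u) - phi(v)||^2 = K(u,u) - 2*K(u,v) + K(v,v)."""
    return K(u, u) - 2 * K(u, v) + K(v, v)

rbf = lambda u, v: np.exp(-np.linalg.norm(u - v) ** 2 / 2)
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(kernel_distance_sq(rbf, u, v))   # distance in feature space, via K only
```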

Important Points

Kernel method = linear method + embedding in feature space.

Kernel functions are used to do the embedding efficiently.

The feature space is a higher-dimensional space, so we must regularize.

Choose a kernel appropriate to the domain.

Kernel Ridge Regression

Simple to derive a kernel method.

Works great in practice with some finessing.

Next time:

Practical issues.

More standard dual derivation.