580.691 Learning Theory, Reza Shadmehr
Maximum Likelihood and Integration of Sensory Modalities

TRANSCRIPT

Page 1: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

580.691 Learning Theory

Reza Shadmehr

Maximum likelihood

Integration of sensory modalities

Page 2: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

We showed that linear regression, the steepest descent algorithm, and LMS all minimize the cost function

$$J(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N}\left(y^{(n)} - \mathbf{w}^T\mathbf{x}^{(n)}\right)^2 = \frac{1}{2}\left(\mathbf{y} - X\mathbf{w}\right)^T\left(\mathbf{y} - X\mathbf{w}\right)$$

This is just one possible cost function. What is the justification for this cost function? Today we will see that this cost function gives rise to the maximum likelihood estimate if the data is normally distributed.
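To make the claim concrete, here is a minimal numerical sketch (assuming NumPy; the simulated data and variable names are illustrative): it fits w by ordinary least squares and checks that the Gaussian log-likelihood of the same data is largest at that w, falling as w is perturbed away from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y = w*^T x + eps with Gaussian noise (sigma = 1).
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1.5, 1.5, n)])
w_true = np.array([2.0, 3.0])
y = X @ w_true + rng.normal(0.0, 1.0, n)

def sse(w):
    """The quadratic cost J = 0.5 * sum of squared errors."""
    r = y - X @ w
    return 0.5 * (r @ r)

def gauss_loglik(w, sigma=1.0):
    """Log-likelihood of y under y ~ N(Xw, sigma^2 I)."""
    r = y - X @ w
    return -n * np.log(np.sqrt(2 * np.pi) * sigma) - (r @ r) / (2 * sigma**2)

w_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # minimizes J
for dw in (0.0, 0.1, 0.5):                   # perturb w away from the optimum
    w = w_ls + dw
    print(dw, sse(w), gauss_loglik(w))       # J rises, log-likelihood falls
```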

Page 3: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Expected value and variance of scalar random variables

$$\bar{x} = E[x]$$

$$\begin{aligned}
\mathrm{var}(x) &= E\left[\left(x-\bar{x}\right)^2\right] = E\left[x^2 - 2x\bar{x} + \bar{x}^2\right]\\
&= E\left[x^2\right] - 2\bar{x}\,E[x] + \bar{x}^2\\
&= E\left[x^2\right] - \bar{x}^2
\end{aligned}$$

Page 4: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Statistical view of regression

• Suppose the outputs y were actually produced by the process:

The "true" underlying process: $y^{*(i)} = \mathbf{w}^{*T}\mathbf{x}^{(i)}$

What we measured: $y^{(i)} = y^{*(i)} + \varepsilon^{(i)}, \qquad \varepsilon \sim N\left(0, \sigma^2\right)$

The data: $D = \left\{\mathbf{x}^{(1)}, y^{(1)}, \mathbf{x}^{(2)}, y^{(2)}, \ldots, \mathbf{x}^{(n)}, y^{(n)}\right\}$

Our model of the process: $\mathbf{y} = X\mathbf{w} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim N\left(\mathbf{0}, \sigma^2 I\right)$

Given a constant X, the underlying process would give us a different y every time we observe it. Given each "batch", we fit our parameters. What is the probability of observing the particular y in trial i?

$$p\left(y^{(i)}\,\middle|\,\mathbf{x}^{(i)}, \mathbf{w}, \sigma\right) = \,?$$

Page 5: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Probabilistic view of linear regression

• Linear regression expresses the random variable y^(i) in terms of the input-independent variation around the mean:

$$y^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)} + \varepsilon^{(i)}, \qquad E\left[y^{(i)}\,\middle|\,\mathbf{x}^{(i)}\right] = \hat{y}^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)}$$

Let us assume a normal distribution with mean zero and variance $\sigma^2$:

$$\varepsilon \sim N\left(0, \sigma^2\right)$$

Then the outputs y given x are:

$$y^{(i)}\,\big|\,\mathbf{x}^{(i)} \sim N\left(\mathbf{w}^T\mathbf{x}^{(i)}, \sigma^2\right)$$

$$p\left(y^{(i)}\,\middle|\,\mathbf{x}^{(i)}, \mathbf{w}, \sigma\right) = \frac{1}{\left(2\pi\sigma^2\right)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y^{(i)} - \mathbf{w}^T\mathbf{x}^{(i)}\right)^2\right)$$

Page 6: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Probabilistic view of linear regression

• As the variance (i.e., the spread) of the residual increases, our confidence in our model's guess decreases.

$$\varepsilon \sim N\left(0, \sigma^2\right)$$

$$p\left(y^{(i)}\,\middle|\,\mathbf{x}^{(i)}, \mathbf{w}, \sigma\right) = \frac{1}{\left(2\pi\sigma^2\right)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y^{(i)} - \mathbf{w}^T\mathbf{x}^{(i)}\right)^2\right)$$

[Plot: $p(y^{(i)}\,|\,\mathbf{x}^{(i)}, \mathbf{w}, \sigma)$ as a function of $y^{(i)}$, for $\sigma = 0.5$ and $\sigma = 1$.]

Page 7: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Probabilistic view of linear regression

• Example: suppose the underlying process was:

$$y^{*(i)} = \mathbf{w}^{*T}\mathbf{x}^{(i)}, \qquad \varepsilon \sim N\left(0, \sigma^2 = 1\right)$$

• Given some data points, we estimate w and also guess the variance of the noise; we can then compute the probability of each y that we observed:

$$p\left(y^{(i)}\,\middle|\,\mathbf{x}^{(i)}, \mathbf{w}, \sigma\right)$$

• We want to find the set of parameters that maximizes this probability for all the data.

[Plot: data points with the fitted curve $\mathbf{w}^T\mathbf{x}$, and the density $p(y^{(i)}\,|\,\mathbf{x}^{(i)}, \mathbf{w}, \sigma)$ drawn around the fit.]

Page 8: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Maximum likelihood estimation

• We view the outputs y^(n) as random variables that were generated by a probabilistic process that had some distribution with unknown parameters θ (e.g., mean and variance). The "best" guess for θ is the one that maximizes the joint probability that the observed data came from that distribution:

$$P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\boldsymbol{\theta}\right) = P\left(y^{(1)}\,\middle|\,\boldsymbol{\theta}\right)P\left(y^{(2)}\,\middle|\,\boldsymbol{\theta}\right)\cdots P\left(y^{(n)}\,\middle|\,\boldsymbol{\theta}\right)$$

$$\boldsymbol{\theta}_{ML} = \arg\max_{\boldsymbol{\theta}}\;\prod_{i=1}^{n} P\left(y^{(i)}\,\middle|\,\boldsymbol{\theta}\right)$$

Page 9: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Maximum likelihood estimation: uniform distribution

• Suppose that n numbers y(i) were drawn from a distribution and we need to estimate the parameters of that distribution.

• Suppose that the distribution was uniform.

$$p\left(y^{(i)}\,\middle|\,a\right) = \frac{1}{a}\;\text{ for }\;0 \le y^{(i)} \le a, \qquad 0\;\text{ otherwise}$$

$$L(a) = P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,a\right) = \frac{1}{a^n}\;\text{ if }\;a \ge \max_i y^{(i)}, \qquad 0\;\text{ otherwise}$$

$$a_{ML} = \arg\max_a L(a) = \max_i y^{(i)}$$

Likelihood that the data came from a model with our specific parameter value
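A quick numerical check of this estimator (assuming NumPy; the numbers are illustrative): since $L(a) = a^{-n}$ decreases in a but is zero whenever a is below the largest sample, the maximizer is the sample maximum.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 3.0
y = rng.uniform(0.0, a_true, size=1000)  # n draws from Uniform(0, a)

a_ml = y.max()  # L(a) = a**(-n) for a >= max(y), else 0, so the max is optimal
print(a_ml)     # close to (and always slightly below) a_true
```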

Page 10: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Maximum likelihood estimation: exponential distribution

$$p\left(y^{(i)}\,\middle|\,a\right) = \frac{1}{a}\exp\left(-\frac{1}{a}y^{(i)}\right), \qquad y^{(i)} \ge 0,\; a > 0$$

$$L(a) = P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,a\right) = \prod_{i=1}^{n}\frac{1}{a}\exp\left(-\frac{1}{a}y^{(i)}\right) = \frac{1}{a^n}\exp\left(-\frac{1}{a}\sum_{i=1}^{n}y^{(i)}\right)$$

Log-likelihood:

$$l(a) = \log L(a) = -n\log a - \frac{1}{a}\sum_{i=1}^{n}y^{(i)}$$

$$\frac{dl}{da} = -\frac{n}{a} + \frac{1}{a^2}\sum_{i=1}^{n}y^{(i)} = 0 \quad\Longrightarrow\quad a_{ML} = \frac{1}{n}\sum_{i=1}^{n}y^{(i)}$$

(using $\frac{d}{dx}\log x = \frac{1}{x}$)
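A similar numerical check for the exponential case (assuming NumPy; the numbers are illustrative): the ML estimate of the scale a is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = 2.5
y = rng.exponential(scale=a_true, size=10000)  # p(y | a) = (1/a) exp(-y/a)

a_ml = y.mean()  # a_ML = (1/n) * sum of y, derived above
print(a_ml)      # close to a_true
```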

Page 11: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Maximum likelihood estimation: Normal distribution

$$p\left(y^{(i)}\,\middle|\,\mu,\sigma\right) = \frac{1}{\left(2\pi\sigma^2\right)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y^{(i)}-\mu\right)^2\right)$$

$$L\left(\mu,\sigma\right) = P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mu,\sigma\right) = \prod_{i=1}^{n} p\left(y^{(i)}\,\middle|\,\mu,\sigma\right)$$

$$\begin{aligned}
l\left(\mu,\sigma\right) = \log L\left(\mu,\sigma\right) &= \sum_{i=1}^{n}\left(-\frac{1}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(y^{(i)}-\mu\right)^2\right)\\
&= -\frac{n}{2}\log 2\pi - n\log\sigma - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y^{(i)}-\mu\right)^2
\end{aligned}$$

Now we see that if σ is a constant, the log-likelihood is proportional to our cost function (the sum of squared errors!).

Page 12: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Maximum likelihood estimation: Normal distribution

Log-likelihood:

$$l\left(\mu,\sigma\right) = -\frac{n}{2}\log 2\pi - n\log\sigma - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y^{(i)}-\mu\right)^2$$

$$\frac{\partial l}{\partial\mu} = -\frac{1}{2\sigma^2}\sum_{i=1}^{n}2\left(y^{(i)}-\mu\right)(-1) = 0 \quad\Longrightarrow\quad \sum_{i=1}^{n}\left(y^{(i)}-\mu\right) = 0 \quad\Longrightarrow\quad \mu_{ML} = \frac{1}{n}\sum_{i=1}^{n}y^{(i)}$$

$$\frac{\partial l}{\partial\sigma} = -\frac{n}{\sigma} + \frac{1}{\sigma^3}\sum_{i=1}^{n}\left(y^{(i)}-\mu\right)^2 = 0 \quad\Longrightarrow\quad \sigma^2_{ML} = \frac{1}{n}\sum_{i=1}^{n}\left(y^{(i)}-\mu_{ML}\right)^2$$

(using $\frac{d}{dx}\log x = \frac{1}{x}$)
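A numerical check of both estimates (assuming NumPy; the numbers are illustrative): for scalar Gaussian data, the ML estimates are the sample mean and the 1/n (not 1/(n-1)) sample variance.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(loc=3.0, scale=2.0, size=5000)  # mu = 3, sigma = 2

mu_ml = y.mean()                       # (1/n) sum y_i
sigma2_ml = ((y - mu_ml) ** 2).mean()  # (1/n) sum (y_i - mu_ml)^2; note 1/n, not 1/(n-1)
print(mu_ml, np.sqrt(sigma2_ml))       # close to 3 and 2
```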

Page 13: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Probabilistic view of linear regression

• If we assume that the y^(i) are independently and identically distributed (i.i.d.), conditional on x^(i), then the joint conditional distribution of the data y is obtained by taking the product of the individual conditional probabilities:

$$\begin{aligned}
P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w},\sigma\right) &= P\left(y^{(1)}\,\middle|\,\mathbf{x}^{(1)},\mathbf{w},\sigma\right)\cdots P\left(y^{(n)}\,\middle|\,\mathbf{x}^{(n)},\mathbf{w},\sigma\right)\\
&= \prod_{i=1}^{n}\frac{1}{\left(2\pi\sigma^2\right)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y^{(i)}-\mathbf{w}^T\mathbf{x}^{(i)}\right)^2\right)\\
&= \frac{1}{\left(2\pi\sigma^2\right)^{n/2}}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y^{(i)}-\mathbf{w}^T\mathbf{x}^{(i)}\right)^2\right)
\end{aligned}$$

Given our model, we can assign a probability to our observation. We want to find parameters that maximize the probability that we will observe data like the ones we were given.

Page 14: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Probabilistic view of linear regression

• Given some data D and two models, $(\mathbf{w}_1, \sigma)$ and $(\mathbf{w}_2, \sigma)$, the better model has the larger joint probability for the actually observed data:

$$P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w}_1,\sigma\right) \quad\text{vs.}\quad P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w}_2,\sigma\right)$$

[Plots: the same data set fitted with parameters $\mathbf{w}_1$ and $\mathbf{w}_2$.]

Page 15: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Probabilistic view of linear regression

• Given some data D and two models, $(\mathbf{w}, \sigma_1)$ and $(\mathbf{w}, \sigma_2)$, the better model has the larger joint probability for the actually observed data:

$$P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w},\sigma\right)$$

The underlying process here was generated with σ = 1, our model was second order, and our joint probability on this data set happened to peak near σ = 1.

[Plots: the data with the fitted curve $\mathbf{w}^T\mathbf{x}$, and the joint probability as a function of σ, peaking near σ = 1.]

Page 16: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

The same underlying process will generate a different D on each run, resulting in different estimates of w and σ, despite the fact that the underlying process did not change.

$$P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w},\sigma\right)$$

[Plots: three data sets $D_1$, $D_2$, $D_3$ drawn from the same process, each shown with its fitted curve and its joint probability as a function of σ.]

Page 17: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Likelihood of our model

• Given some observed data:

$$D = \left\{\mathbf{x}^{(1)}, y^{(1)}, \mathbf{x}^{(2)}, y^{(2)}, \ldots, \mathbf{x}^{(n)}, y^{(n)}\right\}$$

• and model structure:

$$y^{(i)} = y^{*(i)} + \varepsilon^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)} + \varepsilon^{(i)}, \qquad \varepsilon \sim N\left(0, \sigma^2\right)$$

• try to find the parameters w and σ that maximize the joint probability over the observed data:

$$L\left(\mathbf{w},\sigma\right) = P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w},\sigma\right) = \frac{1}{\left(2\pi\sigma^2\right)^{n/2}}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y^{(i)}-\mathbf{w}^T\mathbf{x}^{(i)}\right)^2\right)$$

This is the likelihood that the data came from a model with our specific parameter values.

Page 18: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Maximizing the likelihood

• It is easier to maximize the log of the likelihood function. Log-likelihood:

$$\begin{aligned}
l\left(\mathbf{w},\sigma\right) = \log L\left(\mathbf{w},\sigma\right) &= \log P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w},\sigma\right)\\
&= \sum_{i=1}^{n}\log\left[\frac{1}{\left(2\pi\sigma^2\right)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y^{(i)}-\mathbf{w}^T\mathbf{x}^{(i)}\right)^2\right)\right]\\
&= -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y^{(i)}-\mathbf{w}^T\mathbf{x}^{(i)}\right)^2
\end{aligned}$$

Finding w to maximize the likelihood is equivalent to finding w so as to minimize the loss function:

$$\mathrm{Loss} = \sum_{i=1}^{n}\left(y^{(i)}-\hat{y}^{(i)}\right)^2 = \sum_{i=1}^{n}\left(y^{(i)}-\mathbf{w}^T\mathbf{x}^{(i)}\right)^2$$

Page 19: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Finding the weights that maximize the likelihood

Log-likelihood:

$$l\left(\mathbf{w},\sigma\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left(\mathbf{y}-X\mathbf{w}\right)^T\left(\mathbf{y}-X\mathbf{w}\right)$$

Expanding the quadratic term (X is n×m, w is m×1, and all remaining terms are scalars):

$$\left(\mathbf{y}-X\mathbf{w}\right)^T\left(\mathbf{y}-X\mathbf{w}\right) = \mathbf{y}^T\mathbf{y} - \mathbf{y}^T X\mathbf{w} - \mathbf{w}^T X^T\mathbf{y} + \mathbf{w}^T X^T X\mathbf{w} = \mathbf{y}^T\mathbf{y} - 2\mathbf{w}^T X^T\mathbf{y} + \mathbf{w}^T X^T X\mathbf{w}$$

$$\frac{\partial l}{\partial\mathbf{w}} = -\frac{1}{2\sigma^2}\left(-2X^T\mathbf{y} + 2X^T X\mathbf{w}\right) = 0$$

$$\mathbf{w}_{ML} = \left(X^T X\right)^{-1}X^T\mathbf{y}$$

Above is the ML estimate of w, given the model:

$$y^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)} + \varepsilon^{(i)}, \qquad \varepsilon \sim N\left(0, \sigma^2\right)$$

Page 20: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Finding the noise variance that maximizes the likelihood

$$l\left(\mathbf{w},\sigma\right) = -\frac{n}{2}\log 2\pi - n\log\sigma - \frac{1}{2\sigma^2}\left(\mathbf{y}-X\mathbf{w}\right)^T\left(\mathbf{y}-X\mathbf{w}\right)$$

$$\frac{\partial l}{\partial\sigma} = -\frac{n}{\sigma} + \frac{1}{\sigma^3}\left(\mathbf{y}-X\mathbf{w}\right)^T\left(\mathbf{y}-X\mathbf{w}\right) = 0$$

$$\sigma^2_{ML} = \frac{1}{n}\left(\mathbf{y}-X\mathbf{w}\right)^T\left(\mathbf{y}-X\mathbf{w}\right)$$

Above is the ML estimate of σ², given the model:

$$y^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)} + \varepsilon^{(i)}, \qquad \varepsilon \sim N\left(0, \sigma^2\right)$$

(using $\frac{d}{dx}\log x = \frac{1}{x}$)
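The two closed-form estimates can be computed directly; below is a minimal sketch (assuming NumPy; the simulated data and names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate the model y = X w + eps, eps ~ N(0, sigma^2 I).
n, m = 500, 3
X = rng.normal(size=(n, m))              # n x m design matrix
w_true = np.array([1.0, -2.0, 0.5])
sigma_true = 0.8
y = X @ w_true + rng.normal(0.0, sigma_true, n)

w_ml = np.linalg.solve(X.T @ X, X.T @ y)  # w_ML = (X^T X)^{-1} X^T y
r = y - X @ w_ml
sigma2_ml = (r @ r) / n                   # sigma^2_ML = (1/n) (y - Xw)^T (y - Xw)

print(w_ml, np.sqrt(sigma2_ml))           # close to w_true and sigma_true
```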

Page 21: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

The hiking in the woods problem: combining information from various sources

[Figure: hiker at position x, with readings $\mathbf{y}_a$ and $\mathbf{y}_b$ from the two devices.]

We have gone on a hiking trip and taken with us two GPS devices, one from a European manufacturer, and the other from a US manufacturer. These devices use different satellites for positioning. Our objective is to figure out how to combine the information from the two sensors.

$$\mathbf{y} = \begin{bmatrix}\mathbf{y}_a\\ \mathbf{y}_b\end{bmatrix} = C\mathbf{x} + \boldsymbol{\varepsilon}, \qquad C = \begin{bmatrix}I_{2\times2}\\ I_{2\times2}\end{bmatrix}, \qquad \boldsymbol{\varepsilon} \sim N\left(\mathbf{0}, R\right), \qquad R = \begin{bmatrix}R_a & 0\\ 0 & R_b\end{bmatrix}$$

(y is a 4×1 vector)

$$E\left[\mathbf{y}\right] = C\mathbf{x}, \qquad \mathrm{var}\left(\mathbf{y}\right) = R$$

Likelihood function:

$$p\left(\mathbf{y}\,\middle|\,\mathbf{x}\right) = N\left(C\mathbf{x}, R\right) = \frac{1}{(2\pi)^2\left|R\right|^{1/2}}\exp\left(-\frac{1}{2}\left(\mathbf{y}-C\mathbf{x}\right)^T R^{-1}\left(\mathbf{y}-C\mathbf{x}\right)\right)$$

We want to find the position x that maximizes this likelihood.

Page 22: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

$$\ln p\left(\mathbf{y}\,\middle|\,\mathbf{x}\right) = -2\ln(2\pi) - \frac{1}{2}\ln\left|R\right| - \frac{1}{2}\left(\mathbf{y}-C\mathbf{x}\right)^T R^{-1}\left(\mathbf{y}-C\mathbf{x}\right)$$

$$\frac{d\ln p}{d\mathbf{x}} = C^T R^{-1}\left(\mathbf{y}-C\hat{\mathbf{x}}\right) = 0$$

$$\hat{\mathbf{x}}_{ML} = \left(C^T R^{-1} C\right)^{-1}C^T R^{-1}\mathbf{y} = \left(R_a^{-1}+R_b^{-1}\right)^{-1}\left(R_a^{-1}\mathbf{y}_a + R_b^{-1}\mathbf{y}_b\right)$$

$$\mathrm{var}\left(\hat{\mathbf{x}}_{ML}\right) = \left(C^T R^{-1} C\right)^{-1} = \left(R_a^{-1}+R_b^{-1}\right)^{-1}$$

Our most likely location is one that weighs the reading from each device by the inverse of that device's noise covariance. In other words, we should discount the reading from each device according to that device's uncertainty.

If we stay still and do not move, the variance in our readings is simply due to noise in the devices.

By combining the information from the two devices, the variance of our estimate is less than the variance of each device.
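Here is a minimal sketch of this fusion rule (assuming NumPy; the readings and covariances are made-up illustrative values): each reading is weighted by the inverse of its covariance, and the resulting variance is smaller than either device's variance.

```python
import numpy as np

# Two 2-D position readings with known noise covariances (illustrative values).
y_a = np.array([10.2, 4.9]); R_a = np.diag([4.0, 4.0])  # noisier device
y_b = np.array([9.8, 5.3]);  R_b = np.diag([1.0, 1.0])  # more precise device

Ra_inv = np.linalg.inv(R_a)
Rb_inv = np.linalg.inv(R_b)

# x_ML = (R_a^-1 + R_b^-1)^-1 (R_a^-1 y_a + R_b^-1 y_b)
P = np.linalg.inv(Ra_inv + Rb_inv)        # var(x_ML): smaller than R_a or R_b
x_ml = P @ (Ra_inv @ y_a + Rb_inv @ y_b)  # readings weighted by inverse covariance

print(x_ml, np.diag(P))
```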

Page 23: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

ML estimate

Page 24: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Marc Ernst and Marty Banks (2002) were the first to demonstrate that when our brain makes a decision about a physical property of an object, it does so by combining the various sensory information about that object in a way that is consistent with maximum likelihood state estimation.

Ernst and Banks began by considering a hypothetical situation in which one has to estimate the height of an object. Suppose that you use your index finger and thumb to hold an object. Your haptic system and your visual system each report its height.

$$\mathbf{y} = \begin{bmatrix}y_h\\ y_v\end{bmatrix} = \mathbf{c}\,x + \boldsymbol{\varepsilon}, \qquad \mathbf{c} = \begin{bmatrix}1\\ 1\end{bmatrix}, \qquad \boldsymbol{\varepsilon} \sim N\left(\mathbf{0}, R\right), \qquad R = \begin{bmatrix}\sigma_h^2 & 0\\ 0 & \sigma_v^2\end{bmatrix}$$

$$\hat{x}_{ML} = \frac{\dfrac{1}{\sigma_h^2}\,y_h + \dfrac{1}{\sigma_v^2}\,y_v}{\dfrac{1}{\sigma_h^2} + \dfrac{1}{\sigma_v^2}}, \qquad \mathrm{var}\left(\hat{x}_{ML}\right) = \frac{1}{\dfrac{1}{\sigma_h^2} + \dfrac{1}{\sigma_v^2}}$$

If the noise in the two sensors is equal, then the weights that you apply to the sensors are equal as well. This case is illustrated in the left column of the next figure. On the other hand, if the noise is larger for proprioception, your uncertainty is greater for that sensor, and so you apply a smaller weight to its reading (right column of the next figure). A sketch of the computation follows.
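In the scalar case the ML rule reduces to a weighted average; here is a minimal sketch (plain Python; the readings are hypothetical, and the 4:1 variance ratio mirrors the Ernst and Banks estimate discussed below).

```python
# Scalar visual-haptic cue combination (illustrative numbers).
sigma2_v = 1.0  # visual noise variance
sigma2_h = 4.0  # haptic noise variance, four times larger

# Inverse-variance weights; with a 4:1 ratio these come out to 0.8 and 0.2.
w_v = (1 / sigma2_v) / (1 / sigma2_v + 1 / sigma2_h)
w_h = (1 / sigma2_h) / (1 / sigma2_v + 1 / sigma2_h)

y_v, y_h = 5.6, 5.2  # hypothetical sensor readings (cm)
x_ml = w_v * y_v + w_h * y_h
var_ml = 1 / (1 / sigma2_v + 1 / sigma2_h)  # smaller than either variance

print(w_v, w_h, x_ml, var_ml)
```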

Page 25: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

[Figure: left column, equal uncertainty in vision and proprioception; right column, more uncertainty in proprioception.]

Page 26: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Measuring the noise in a biological sensor

If one were to ask you to report the height of the object, of course you would not report your belief as a probability distribution. To estimate this distribution, Ernst and Banks acquired a psychometric function, shown in the lower part of the graph. To acquire this function, they provided their subjects with a standard object of height 5.5 cm. They then presented a second object of variable height and asked whether it was taller than the first object. If the subject represented the height of the standard object with a maximum likelihood estimate, then the probability of classifying the second object as taller is simply the cumulative probability distribution. This is called a psychometric function. The point of subjective equality (PSE) is the height at which the psychometric function is at 0.5.

$$y_1 \sim N\left(\mu_1, \sigma_h^2\right), \qquad y_2 \sim N\left(\mu_2, \sigma_h^2\right)$$

$$\hat{x} = \hat{y}_2 - \hat{y}_1 \sim N\left(\mu_2 - \mu_1,\; 2\sigma_h^2\right)$$

$$\Pr\left(\text{classifying object 2 as taller}\right) = \Pr\left(\hat{x} > 0\right) = \int_0^{\infty} N\left(x;\; \mu_2-\mu_1,\; 2\sigma_h^2\right)dx = 1 - \mathrm{cdf}\left(0;\; \mu_2-\mu_1,\; \sqrt{2}\,\sigma_h\right)$$

where

$$\mathrm{erf}\left(x\right) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt, \qquad \mathrm{cdf}\left(x;\,\mu,\sigma\right) = \int_{-\infty}^{x} N\left(t;\,\mu,\sigma^2\right)dt = \frac{1}{2}\left(1 + \mathrm{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right)$$

[Plots: the densities $p(y)$ of $y_1$ and $y_2$; the density of $\hat{x}$; and the psychometric function $\Pr(\hat{x} > 0)$ for $\sigma_h = 1, 2, 3$.]
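The psychometric function above can be computed directly from the normal cdf; here is a minimal sketch (Python standard library plus NumPy for the grid; the σ values and heights are illustrative).

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # cdf(x; mu, sigma) = (1 + erf((x - mu) / (sigma * sqrt(2)))) / 2
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def p_taller(mu2, mu1, sigma_h):
    # Pr(x_hat > 0), where x_hat = y2_hat - y1_hat ~ N(mu2 - mu1, 2 sigma_h^2)
    return 1.0 - normal_cdf(0.0, mu2 - mu1, sqrt(2.0) * sigma_h)

# Psychometric curves around a 5.5 cm standard, for two noise levels:
for h in np.arange(4.5, 6.6, 0.5):
    print(h, p_taller(h, 5.5, 0.5), p_taller(h, 5.5, 1.5))
```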

Page 27: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

$$\sigma_h^2 = 4\sigma_v^2$$

The authors estimated that the noise variance in the haptic sense was four times larger than the noise variance in the visual sense.

This implies that in integrating visual and haptic information about an object, the brain should "weigh" the visual information 4 times as much as the haptic information.

To test this, subjects were presented with a standard object for which the haptic information and the visual information indicated slightly different heights.

Subjects assigned a weight of around 0.8 to the visual information and around 0.2 to the haptic information. To estimate these weights, the authors presented a second object (for which the haptic and visual information agreed) and asked which one was taller.

Page 28: 580.691  Learning Theory Reza Shadmehr Maximum likelihood Integration of sensory modalities

Summary

The "true" underlying process: $y^{*(i)} = \mathbf{w}^{*T}\mathbf{x}^{(i)}$

What we measured: $y^{(i)} = y^{*(i)} + \varepsilon^{(i)}, \qquad \varepsilon \sim N\left(0, \sigma^2\right)$

The data: $D = \left\{\mathbf{x}^{(1)}, y^{(1)}, \mathbf{x}^{(2)}, y^{(2)}, \ldots, \mathbf{x}^{(n)}, y^{(n)}\right\}$

Our model of the process: $\hat{y}^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)}$

Log-likelihood:

$$l\left(\mathbf{w},\sigma\right) = \log P\left(y^{(1)},\ldots,y^{(n)}\,\middle|\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)},\mathbf{w},\sigma\right)$$

ML estimate of model parameters, given X:

$$\mathbf{w}_{ML} = \left(X^T X\right)^{-1}X^T\mathbf{y}, \qquad \sigma^2_{ML} = \frac{1}{n}\left(\mathbf{y}-X\mathbf{w}_{ML}\right)^T\left(\mathbf{y}-X\mathbf{w}_{ML}\right)$$