
Mixture Density Networks

Sargur Srihari



Mixture Density Networks

•  The goal of supervised learning is to model the conditional distribution p(t|x)
•  For many simple problems this distribution is chosen to be Gaussian
•  In regression, p(t|x) is typically assumed to be Gaussian, i.e., p(t|x) = N(t | y(x,w), β⁻¹)

•  In practical applications p(t|x) can be significantly non-Gaussian, particularly in inverse problems
•  The Gaussian assumption can then lead to poor results
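To see the connection to least squares (a standard derivation, not spelled out on the slide; {x_n, t_n} denotes the training pairs and N their number), the negative log likelihood under this Gaussian assumption is, up to terms independent of w, proportional to the sum-of-squares error:

E(w) = − Σ_{n=1}^{N} ln N(t_n | y(x_n, w), β⁻¹) = (β/2) Σ_{n=1}^{N} { y(x_n, w) − t_n }² − (N/2) ln β + (N/2) ln(2π)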


Kinematics of a robot arm

•  Robot arm with two links

•  The inverse problem is a regression problem with
•  two inputs: the desired location of the end effector (x1, x2)
•  two outputs: the angles of the two links (θ1, θ2)

•  It has two solutions (elbow up and elbow down)

Forward problem: Find end effector position (x1,x2) given joint angles (θ1,θ2).

It has a unique solution

Inverse problem: Find the joint angles (θ1, θ2) for a desired end effector position (x1, x2). It has two solutions: elbow-up and elbow-down.
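For concreteness, a minimal sketch of the forward kinematics and the two inverse solutions; the link lengths and joint angles are assumed example values, not taken from the slide:

import numpy as np

L1, L2 = 1.0, 1.0   # assumed link lengths (not given on the slide)

def forward(theta1, theta2):
    """Forward problem: end effector position (x1, x2) from the joint angles."""
    x1 = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    x2 = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([x1, x2])

# Elbow-down configuration (example angles)
theta1, theta2 = 0.3, 0.8
# Mirrored elbow-up configuration that reaches the same end effector position
phi = np.arctan2(L2 * np.sin(theta2), L1 + L2 * np.cos(theta2))
theta1_up, theta2_up = theta1 + 2 * phi, -theta2

print(forward(theta1, theta2))        # both calls print the same (x1, x2):
print(forward(theta1_up, theta2_up))  # the inverse problem has two solutions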


Forward and Inverse Problems


•  Forward problems correspond to causality in a physical system

•  They generally have a unique solution
•  E.g., a specific pattern of symptoms in the human body is caused by a particular disease

•  In ML we are typically interested in inverse problems
•  E.g., predicting the disease given the symptoms

•  If the forward problem is a many-to-one mapping, the inverse has multiple solutions
•  E.g., several diseases produce the same symptoms, so the same symptoms can be caused by any of several diseases

An Example: x is disease, y is symptom; x is elbow position, y is effector position


http://otoro.net/ml/mixture/index.html

http://otoro.net/ml/mixture/inverse.html

http://otoro.net/ml/mixture/mixture.html

Figure panels: forward data with least-squares regression; inverse data with least-squares regression; inverse data with a mixture model; forward data with a mixture model

A Mixture Density

•  Generative model with K components

•  Components can be Gaussian for continuous variables, Bernoulli for binary target variables, etc

•  Note that the mixing coefficients π_k(x) depend on x and sum to 1 for each x
•  So do the means and variances
•  This is an example of a heteroscedastic model, since the variance is a function of the input vector x


p(t|x) = Σ_{k=1}^{K} π_k(x) N(t | μ_k(x), σ_k²(x))

The variance does not remain constant.
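A minimal sketch of evaluating this mixture density at one input, assuming the x-dependent parameters have already been computed; the numbers below are illustrative only, and the target is taken to be univariate:

import numpy as np

def gaussian(t, mu, sigma):
    """Univariate Gaussian density N(t | mu, sigma^2)."""
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_density(t, pi, mu, sigma):
    """p(t|x) = sum_k pi_k(x) N(t | mu_k(x), sigma_k(x)^2) for one value of x."""
    return np.sum(pi * gaussian(t, mu, sigma))

# Example parameters for one value of x (illustrative numbers only)
pi    = np.array([0.5, 0.3, 0.2])     # mixing coefficients, sum to 1
mu    = np.array([0.2, 0.5, 0.8])     # component means
sigma = np.array([0.05, 0.05, 0.05])  # component standard deviations

print(mixture_density(0.5, pi, mu, sigma))   # density of the target value t = 0.5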


Data Set for Forward and Inverse Problems


•  Least squares corresponds to maximum likelihood under a Gaussian assumption
•  It leads to a poor result for the highly non-Gaussian inverse problem

•  Seek a general framework for modeling conditional probability distributions

•  Achieved by using a mixture model p(t|x)

Forward problem data set: x is sampled uniformly over (0,1) to give values {x_n}; the target t_n is obtained from the function x_n + 0.3 sin(2πx_n), with uniform noise over (−0.1, 0.1) added. The red curve is the result of fitting a two-layer neural network by minimizing the sum-of-squares error.

The corresponding inverse problem is obtained by reversing the roles of x and t. The least-squares fit to the inverse data is very poor.
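A sketch of generating this toy data set and its inverse; the sample size and random seed are assumptions, not given on the slide:

import numpy as np

rng = np.random.default_rng(0)
N = 300                                  # assumed number of samples

# Forward problem data: x -> t
x = rng.uniform(0.0, 1.0, size=N)
t = x + 0.3 * np.sin(2 * np.pi * x) + rng.uniform(-0.1, 0.1, size=N)

# Inverse problem data: exchange the roles of input and target
x_inv, t_inv = t, x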


Parameters of Mixture Model

•  Parameters of the mixture density:
1.  Mixing coefficients π_k(x)
2.  Means μ_k(x)
3.  Variances σ_k²(x)

•  They are governed by the outputs of a neural network that takes x as input

•  A single network predicts the parameters of all the component densities


Mixture Density Network


•  Network represents general conditional densities p(t|x) by considering a parametric mixture model

•  It takes x as the input vector and provides the parameters of the distribution as output
•  In effect, it specifies the distribution p(t|x)

Training a Mixture Density Network

https://publications.aston.ac.uk/373/1/NCRG_94_004.pdf

z_q and t_q denote the parameter vector and the target vector for a single input x_q

The MDN combines the structure of a feedforward network with a mixture model

A software implementation of training returns the error and the derivative of the error


No. of Outputs of Neural Network

•  Two-layer network with sigmoidal (tanh) hidden units
•  The number of output units is calculated as follows:
•  If there are K components in the mixture model, there are K mixing coefficients π_k(x), determined by activations a_k^π
•  K outputs a_k^σ determine the kernel widths σ_k(x)
•  If the target t has L components, there are K × L outputs a_kj^μ that determine the components μ_kj(x) of the kernel centres μ_k(x)
•  The network therefore has (L+2)K outputs, instead of the usual L outputs, which simply predict the conditional means of the target variables
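A minimal sketch of such a network's forward pass in NumPy; the layer sizes, weight shapes and initialization are illustrative assumptions (here K = 3 and L = 2, giving (L+2)K = 12 outputs):

import numpy as np

K, L, D, H = 3, 2, 2, 20          # mixture components, target dims, input dims, hidden units (example sizes)
n_outputs = (L + 2) * K           # (L+2)K = 12 output activations for this example

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(H, D)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=(n_outputs, H)), np.zeros(n_outputs)

def network(x):
    """Two-layer network with tanh hidden units; returns the three groups of output activations."""
    h = np.tanh(W1 @ x + b1)
    a = W2 @ h + b2
    a_pi    = a[:K]                    # K activations for the mixing coefficients
    a_sigma = a[K:2 * K]               # K activations for the kernel widths
    a_mu    = a[2 * K:].reshape(K, L)  # K x L activations for the kernel centres
    return a_pi, a_sigma, a_mu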


Outputs of Mixture Density Network

1.  Mixing coefficients must satisfy

Σ_{k=1}^{K} π_k(x) = 1,   0 ≤ π_k(x) ≤ 1

Achieved using softmax outputs:

π_k(x) = exp(a_k^π) / Σ_{l=1}^{K} exp(a_l^π)

2.  Variances must satisfy σ_k²(x) ≥ 0
Represented as exponentials of the activations:

σ_k(x) = exp(a_k^σ)

3.  Means are real-valued and represented directly by the output activations:

μ_kj(x) = a_kj^μ
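A sketch of these output transformations, continuing the layout of the previous sketch; the max-subtraction in the softmax is a standard numerical-stability trick, not something stated on the slide:

import numpy as np

def activations_to_params(a_pi, a_sigma, a_mu):
    """Map raw output activations to valid mixture parameters."""
    # Softmax: mixing coefficients are non-negative and sum to 1
    e = np.exp(a_pi - np.max(a_pi))   # max-subtraction for numerical stability (my addition)
    pi = e / np.sum(e)
    # Exponential keeps the kernel widths positive
    sigma = np.exp(a_sigma)
    # Means are represented directly by the activations
    mu = a_mu
    return pi, sigma, mu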


Error Function for Mixture Density Network

•  The error function can be set by maximum likelihood, from the distribution

p(t|x) = Σ_{k=1}^{K} π_k(x) N(t | μ_k(x), σ_k²(x))

(This conditional density is also what is used to predict the output for a given input.)

•  The negative logarithm of the likelihood function is

E(w) = − Σ_{n=1}^{N} ln { Σ_{k=1}^{K} π_k(x_n, w) N(t_n | μ_k(x_n, w), σ_k²(x_n, w)) }
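A sketch of this error function for a batch of inputs, assuming isotropic Gaussian components and that the per-input mixture parameters have already been computed; a log-sum-exp formulation would be more stable numerically, but the direct form is shown to match the equation:

import numpy as np

def gaussian(t, mu, sigma):
    """Isotropic Gaussian density N(t | mu, sigma^2 I) for L-dimensional targets."""
    L = t.shape[-1]
    sq = np.sum((t - mu) ** 2, axis=-1)
    return np.exp(-0.5 * sq / sigma ** 2) / (2 * np.pi * sigma ** 2) ** (L / 2)

def mdn_error(pi, mu, sigma, t):
    """E(w) = -sum_n ln sum_k pi_k N(t_n | mu_k, sigma_k^2).
    Shapes: pi (N, K), mu (N, K, L), sigma (N, K), t (N, L)."""
    comp = gaussian(t[:, None, :], mu, sigma)          # (N, K) component densities
    return -np.sum(np.log(np.sum(pi * comp, axis=1)))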


Minimization of Error Function

•  Need to calculate the derivatives of the error E(w) with respect to the components of w
•  These can be evaluated provided we find suitable expressions for the derivatives of the error with respect to the output-unit activations
•  Those derivatives represent the error signals for each pattern and each output unit
•  Since the error is a sum of terms, one for each data point, the derivative terms are easily obtained one data point at a time



View of mixing coefficients

•  Convenient to view mixing coefficients πk(x) as x-dependent prior probabilities

•  The corresponding posterior probabilities are

γ_k(t|x) = π_k(x) N_nk / Σ_{l=1}^{K} π_l(x) N_nl

•  where N_nk denotes N(t_n | μ_k(x_n), σ_k²(x_n))


Derivatives with respect to Network Output Activations

1.  Mixing coefficients

∂E_n/∂a_k^π = π_k − γ_k

2.  Component means

∂E_n/∂a_kl^μ = γ_k (μ_kl − t_l) / σ_k²

3.  Component variances

∂E_n/∂a_k^σ = −γ_k ( ||t − μ_k||² / σ_k² − L )

where L is the dimensionality of the target t
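A sketch of these error signals for a single data point, first computing the responsibilities γ_k from the previous slide and then the three derivatives; it assumes the isotropic Gaussian parameterization and array shapes used in the earlier sketches:

import numpy as np

def gaussian(t, mu, sigma):
    """Isotropic Gaussian density N(t | mu, sigma^2 I)."""
    L = t.shape[-1]
    sq = np.sum((t - mu) ** 2, axis=-1)
    return np.exp(-0.5 * sq / sigma ** 2) / (2 * np.pi * sigma ** 2) ** (L / 2)

def error_signals(pi, mu, sigma, t):
    """Derivatives of E_n w.r.t. the output activations for one input x_n.
    Shapes: pi, sigma (K,), mu (K, L), t (L,)."""
    L = t.shape[0]
    N_k = gaussian(t, mu, sigma)                 # component densities N_nk
    gamma = pi * N_k / np.sum(pi * N_k)          # posterior responsibilities
    dE_da_pi = pi - gamma
    dE_da_mu = gamma[:, None] * (mu - t) / sigma[:, None] ** 2
    dE_da_sigma = -gamma * (np.sum((t - mu) ** 2, axis=-1) / sigma ** 2 - L)
    return dE_da_pi, dE_da_mu, dE_da_sigma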


Training Data for Mixture Density Network

•  Obtained easily from the forward data by exchanging the roles of x and t
•  The forward data gives, for different joint angles, the position of the end effector

•  Note that we are using x as input and t as output after data exchange



Output of Mixture Density Network


Figure panels: mixing coefficients π_k(x) versus x; means μ_k(x); contours of the conditional probability density of the target data; the data set

While the outputs of the neural network (and hence the mixture parameters) are necessarily single-valued, the model is able to produce a conditional density that is unimodal for some values of x and trimodal for other values.

The three mixing coefficients have to sum to unity.


Use of Mixture Density Network

•  Once the mixture density network has been trained, it can predict the conditional density of the target data for any given value of the input vector
•  From this density we can calculate more specific quantities of interest in applications, e.g., the mean of the target data


Predicting value of output vector

•  The conditional distribution represents a complete description of the generator of the data
•  From this density we can calculate the mean, which is the conditional average of the target data:

E[t|x] = ∫ t p(t|x) dt = Σ_{k=1}^{K} π_k(x) μ_k(x)

•  This is the same as the least-squares solution and is of limited value: the average of two solutions is not itself a solution
•  We can also evaluate the variance of the density function about the conditional average:

s²(x) = E[ ||t − E[t|x]||² | x ] = Σ_{k=1}^{K} π_k(x) { σ_k²(x) + || μ_k(x) − Σ_{l=1}^{K} π_l(x) μ_l(x) ||² }

(Note: E[t|x] here denotes an expected value, not the error function.)
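A sketch of computing these two quantities from the mixture parameters at a single input (illustrative numbers only):

import numpy as np

def conditional_mean_and_variance(pi, mu, sigma):
    """E[t|x] and s^2(x) for the mixture at one input x.
    Shapes: pi, sigma (K,), mu (K, L)."""
    mean = np.sum(pi[:, None] * mu, axis=0)       # sum_k pi_k(x) mu_k(x)
    s2 = np.sum(pi * (sigma ** 2 + np.sum((mu - mean) ** 2, axis=-1)))
    return mean, s2

# Illustrative parameters for one value of x
pi    = np.array([0.5, 0.3, 0.2])
mu    = np.array([[0.2, 0.1], [0.5, 0.4], [0.8, 0.9]])
sigma = np.array([0.05, 0.05, 0.05])
print(conditional_mean_and_variance(pi, mu, sigma))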


Mode as the solution

•  Each of the modes of the mixture density is a better solution than the single mean
•  The mode does not have a simple analytical solution and requires numerical iteration
•  A simple alternative: take the mean of the most probable component, i.e., the one with the largest mixing coefficient for each value of x

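A sketch of this approximation, using the same parameter layout as above:

import numpy as np

def approximate_mode(pi, mu):
    """Mean of the most probable component (largest mixing coefficient).
    Shapes: pi (K,), mu (K, L)."""
    return mu[np.argmax(pi)]

pi = np.array([0.5, 0.3, 0.2])
mu = np.array([[0.2, 0.1], [0.5, 0.4], [0.8, 0.9]])
print(approximate_mode(pi, mu))   # -> [0.2 0.1], the mean of the most probable component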


Example of Mixture Density Network


Figure panels: mixing coefficients π_k(x) versus x; means μ_k(x); approximate conditional mode (red points of the conditional density); contours of the conditional probability density; the data set