
Machine Learning - Waseda University
Lecture 1: Introduction

AD

June 2011

AD () June 2011 1 / 35


Acknowledgments

Thanks to

Nando de Freitas

Kevin Murphy


Administrivia & Announcement

Webpage: http://www.cs.ubc.ca/~arnaud/waseda.html

I will be here every Tuesday.

Matlab resources: http://www.cs.ubc.ca/~mitchell/matlabResources.html


Pre-requisites

Maths: multivariate calculus, linear algebra, probability.

CS: programming skills; knowledge of data structures and algorithms is helpful but not crucial.

Prior exposure to statistics/machine learning is highly desirable, but you will be OK as long as your math is good.

I will discuss Bayesian statistics at length.


Textbook

C.M. Bishop, Pattern Recognition and Machine Learning, Springer,2006.

Check the webpage http://research.microsoft.com/~cmbishop/PRML/

Each week, there will be required reading, optional reading and background reading.


What is Machine Learning?

“Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the task or tasks drawn from the same population more efficiently and more effectively the next time” — Herb Simon.

Machine Learning is an interdisciplinary field at the intersection of Statistics, CS, EE, etc.

At the beginning, Machine Learning was fairly heuristic, but it has now evolved and become, in my opinion, Applied Statistics with a CS flavour.

Aim of this course: introducing basic models, concepts and algorithms.


Main Learning Tasks

Supervised learning: given a training set of N input-output pairs {xn, tn} ∈ X × T, construct a function f : X → T to predict the output t̂ = f(x) associated with a new input x.
- Each input xn is a p-dimensional feature vector (covariates, explanatory variables).
- Each output tn is a target variable (response).

Regression corresponds to T = R^d.

Classification corresponds to T = {1, ..., K}. The aim is to produce the correct output given a new input.
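As a concrete instance of the supervised setup above, here is a minimal sketch (my own illustration, not from the slides) of a 1-nearest-neighbour classifier: the data, the classes, and the helper name are all made up for the example.

```python
import numpy as np

# 1-nearest-neighbour classifier: f(x) = t_{n*}, where n* indexes the
# training input closest to x. X is R^p and T = {1, ..., K}.
def fit_predict_1nn(X_train, t_train, x_new):
    dists = np.linalg.norm(X_train - x_new, axis=1)  # distances to all xn
    return t_train[np.argmin(dists)]                 # label of nearest xn

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
t_train = np.array([1, 1, 2])
print(fit_predict_1nn(X_train, t_train, np.array([0.5, 0.2])))  # class 1
```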


Polynomial regression

Figure: Polynomial regression where x ∈ R and t ∈ R
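A polynomial regression of this kind can be sketched in a few lines; the data here is synthetic (a noisy sinusoid of my choosing), not the figure's.

```python
import numpy as np

# Polynomial regression: fit t ≈ w0 + w1 x + ... + wM x^M by least squares.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)  # noisy targets

M = 3                          # polynomial degree
w = np.polyfit(x, t, M)        # least-squares coefficients, highest degree first
t_hat = np.polyval(w, x)       # predictions at the training inputs
print(w.shape)                 # (4,) : M + 1 coefficients
```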


Linear regression

Figure: Linear least-squares fitting for x ∈ R2 and t ∈ R. We seek the linear function of x that minimizes the sum of squared residuals.


Piecewise linear regression

Figure: Piecewise linear regression function


Figure: Linear classifier where xn ∈ R2 and tn ∈ {0, 1}


Handwritten digit recognition

Figure: Examples of handwritten digits from US postal employees


Figure: Can you pick out the tufas?


Practical examples of classification

Email spam filtering (feature vector = “bag of words”)

Webpage classification

Detecting credit card fraud

Credit scoring

Face detection in images


Unsupervised learning: we are given training data {xn}. The aim is to produce a model or build useful representations for x:
- modeling the distribution of the data,
- clustering,
- data association,
- dimensionality reduction,
- structure learning.
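As one concrete unsupervised task from the list above, here is a minimal k-means clustering sketch (my own illustration, not from the slides): alternate assigning each point to its nearest centre and moving each centre to the mean of its assigned points.

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    # spread the initial centres across the (arbitrary) data ordering
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # hard assignment: each point to the nearest centre
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # update: each centre becomes the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two well-separated blobs in R^2
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels, centers = kmeans(X, 2)
print(sorted(set(labels.tolist())))  # [0, 1]
```

This is the hard-clustering version; soft clustering replaces the argmin assignment with responsibilities, as in the mixture models discussed later.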


Hard and Soft Clustering

Figure: Data (left), hard clustering (middle), soft clustering (right)


Finite Mixture Models

Finite Mixture of Gaussians

f(x | θ) = ∑_{i=1}^{k} p_i N(x; μ_i, σ_i^2)

where θ = {μ_i, σ_i^2, p_i}_{i=1,...,k} is estimated from some data (x_1, ..., x_n).

How to estimate the parameters?

How to estimate the number of components k?
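Before turning to estimation, it helps to see the density itself. A sketch of evaluating the mixture f(x | θ) at a point, with parameter values made up for illustration:

```python
import numpy as np

# Univariate Gaussian density N(x; mu, sigma^2)
def gaussian_pdf(x, mu, sigma2):
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Finite mixture: f(x | theta) = sum_i p_i N(x; mu_i, sigma_i^2)
def mixture_pdf(x, p, mu, sigma2):
    return sum(pi * gaussian_pdf(x, mi, s2)
               for pi, mi, s2 in zip(p, mu, sigma2))

p = [0.3, 0.7]       # mixture weights (sum to 1)
mu = [-1.0, 2.0]     # component means
sigma2 = [0.5, 1.0]  # component variances
print(mixture_pdf(0.0, p, mu, sigma2))
```

Maximizing the likelihood of this density over θ is what the EM algorithm, covered later in the course, is designed to do.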


Modeling Dependent Data: Hidden Markov Models

In this case, we have

xn | xn−1 ∼ f(· | xn−1),
yn | xn ∼ g(· | xn).

For example, stochastic volatility model

xn = α xn−1 + σ vn where vn ∼ N(0, 1),
yn = β exp(xn/2) wn where wn ∼ N(0, 1),

where the process (yn) is observed but (xn, α, σ, β) are unknown.
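Simulating this stochastic volatility model is straightforward; the parameter values below (α, σ, β) are illustrative choices, since the model leaves them unknown.

```python
import numpy as np

# x_n = alpha * x_{n-1} + sigma * v_n,  y_n = beta * exp(x_n / 2) * w_n,
# with v_n, w_n iid N(0, 1). Only (y_n) would be observed in practice.
rng = np.random.default_rng(1)
alpha, sigma, beta = 0.9, 0.3, 0.8   # illustrative parameter values
N = 100

x = np.zeros(N)                      # hidden log-volatility states
y = np.zeros(N)                      # observed returns
for n in range(1, N):
    x[n] = alpha * x[n - 1] + sigma * rng.standard_normal()
    y[n] = beta * np.exp(x[n] / 2) * rng.standard_normal()
print(x.shape, y.shape)
```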


Tracking Application


Hierarchical Clustering

Figure: Dendrogram from agglomerative hierarchical clustering with average linkage applied to the human microarray data


Dimensionality reduction

Figure: Simulated data in three classes, near the surface of a half-sphere


Figure: The best rank-two linear approximation to the half-sphere data. The right panel shows the projected points with coordinates given by the first two principal components of the data
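A best rank-two linear approximation of this kind is PCA, computable via the SVD; the sketch below uses synthetic Gaussian data rather than the half-sphere data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))        # 100 points in R^3

Xc = X - X.mean(axis=0)                  # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                        # coordinates in the first two PCs
X_hat = Z @ Vt[:2] + X.mean(axis=0)      # rank-two reconstruction
print(Z.shape, X_hat.shape)              # (100, 2) (100, 3)
```

By the Eckart-Young theorem, this reconstruction minimizes the squared error among all rank-two linear approximations.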


Manifold learning

Figure: Data lying on a low-dimensional manifold


Structure learning

Figure: Graphical model


Ockham’s razor

How can we predict the test set if it may be arbitrarily different from the training set? We need to assume that the future will be like the present in statistical terms. Any hypothesis that agrees with all the data is as good as any other, but we generally prefer “simpler” ones.


Active learning

Active learning is a principled way of integrating decision theory with traditional statistical methods for learning models from data.

In active learning, the machine can query the environment. That is, it can ask questions.

Decision theory leads to optimal strategies for choosing when and what questions to ask in order to gather the best possible data.

“Good” data is often better than a lot of data.


Active learning and surveillance

In a network with thousands of cameras, which camera views should be presented to the human operator?


Active learning and sensor networks

How do we optimally choose a subset of sensors in order to obtain the best understanding of the world while minimizing resource expenditure (power, bandwidth, distractions)?


Active learning example


Decision theoretic foundations of machine learning

Agents (humans, animals, machines) always have to make decisions under uncertainty.

Uncertainty arises because we do not know the “true” state of the world, e.g. because of a limited field of view, because of “noise”, etc.

The optimal way to behave when faced with uncertainty is as follows:
- Estimate the state of the world, given the evidence you have seen up until now (this is called your belief state).
- Given your preferences over possible future states (expressed as a utility function), and your beliefs about the effects of your actions, pick the action that you think will be the best.


Decision making under uncertainty

You should choose the action with maximum expected utility:

a*_{t+1} = argmax_{a_{t+1} ∈ A} ∑_x P_θ(X_{t+1} = x | a_{1:t+1}, y_{1:t}) U_φ(x)

where
- X_t represents the “true” (unknown) state of the world at time t,
- y_{1:t} = y_1, . . . , y_t represents the data I have seen up until now,
- a_{1:t} are the actions that I have taken up until now,
- U(x) is a utility function on state x,
- θ are parameters of your world model,
- φ are parameters of your utility function.


Decision making under uncertainty

We will later justify why your beliefs should be represented usingprobability theory.

It can also be shown that all your preferences can be expressed usinga scalar utility function (shocking, but true).

Machine learning is concerned with estimating models of the world(θ) given data, and using them to compute belief states (i.e.,probabilistic inference).

Preference elicitation is concerned with estimating models (φ) ofutility functions (we will not consider this issue).


Decision making under uncertainty: examples

Robot navigation: Xt=robot location, Yt = video image, At =direction to move, U(X ) = distance to goal.

Medical diagnosis: Xt = whether the patient has cancer, Yt =symptoms, At = perform clinical test, U(X ) = health of patient.

Financial prediction: Xt = true state of economy, Yt = observedstock prices, At = invest in stock i , U(X ) = wealth.


Tentative topics

Review of probability and Bayesian statistics

Linear and nonlinear regression and classification

Nonparametric and model-based clustering

Finite mixture and hidden Markov models, EM algorithms.

Graphical models.

Variational methods.

MCMC and particle filters.

PCA, Factor analysis


Project

A list of potential projects will be posted shortly.

You can propose your own idea.

I expect the final project to be written like a conference paper.
