
Introduction to Machine Learning


Course Administration

• Lecturers:
  • Amir Globerson ([email protected])
  • Yishay Mansour ([email protected])
• Teaching Assistant:
  • Regev Schweiger ([email protected])
• Course "responsibilities":
  • Homework: 20% of the grade, done in pairs
  • Final exam: 80% of the grade, February 1st, 2017, done alone


Today’s lecture

• Overview of Machine Learning and the course
• Motivation for ML
• Basic concepts
• Bird's-eye view of ML and the material we will cover
• Outline of the course


Machine learning is everywhere

• Web search
• Speech recognition
• Language translation
• Image recognition
• Recommendation systems
• Self-driving cars
• Drones
• Basic technology in many applications.


Typical ML tasks

• Classification:
  • Domain: instances with labels
    • Emails: spam / not spam
    • Patients: ill / healthy
    • Credit card: legitimate / fraud
  • Input: training data
    • Examples from the domain, selected randomly
  • Output: predictor
    • Classifies new instances
  • Supervised learning
• Labels:
  • Binary, discrete, or continuous
  • Multiple or unique labels per instance
    • Multiple labels: news categories
• Predictors:
  • Linear classifiers
  • Decision trees
  • Neural networks
  • Nearest neighbor


Typical ML tasks

• Clustering:
  • No observed label
    • Often the label is "hidden"
  • Examples: user modeling, document classification
  • Input: training data
  • Output: clustering
    • Partitioning the space
    • A somewhat fuzzy goal
  • Unsupervised learning
• Control:
  • The learner affects the environment
  • Examples: robots, self-driving cars
  • Interactive environment
    • What you see depends on what you do
    • Exploration vs. exploitation
  • Reinforcement learning
    • Not in the scope of this class


Why do we need Machine Learning?

• Some tasks are simply too difficult to program

• Machine learning gives a conceptual alternative to programming

• Becoming easier to justify over the years

• What is ML:

Arthur Samuel (1959):

“Field of study that gives computers the ability to learn without being explicitly programmed”


Building an ML model

• Domain:
  • Instances: attributes and labels
  • There is no replacement for having the "right" attributes
• Distribution:
  • How instances are generated
  • Basic assumption: Future ≈ Past
  • Independence assumption
• Sample:
  • Training / testing
• Hypothesis:
  • How we classify
  • Our output: {0,1} or [0,1]
• Comparing hypotheses:
  • More accurate is better
  • Tradeoffs: many small errors or a few large errors?
  • False positives / false negatives


ML model: complete information

• Notation:
  • x = instance attributes
  • y = instance label
  • D(x,y) = the joint distribution
• Assume we know D!
  • Specifically, the class-conditional distributions D+ and D−:
    D = λD+ + (1−λ)D−
• Binary classification: two labels, + and −
• Task: given x, predict y
  • Compute Pr[+|x]
• Bayes rule (a small sketch follows):
  Pr[+|x] = Pr[x|+] · Pr[+] / Pr[x] = λD+(x) / (λD+(x) + (1−λ)D−(x))
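To make the formula concrete, here is a minimal sketch of the posterior computation. The Gaussian class-conditional densities are an illustrative assumption; the slides leave D+ and D− abstract.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_positive(x, lam, d_pos, d_neg):
    """Bayes rule: Pr[+|x] = lam*D+(x) / (lam*D+(x) + (1-lam)*D-(x))."""
    return lam * d_pos(x) / (lam * d_pos(x) + (1 - lam) * d_neg(x))

# Illustrative densities: positives ~ N(1,1), negatives ~ N(-1,1), prior lam = 0.5.
d_pos = lambda x: gaussian_pdf(x, 1.0, 1.0)
d_neg = lambda x: gaussian_pdf(x, -1.0, 1.0)
print(posterior_positive(0.8, 0.5, d_pos, d_neg))  # ~0.83
```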


ML model: complete information

• How do we predict?
  • We can compute Pr[+|x]
  • It depends on our goal!
• Minimizing errors: the 0-1 loss
  • If Pr[+|x] > Pr[−|x] then + else −
  • Insensitive to probabilities: 0.99 and 0.51 are predicted the same
• Absolute loss:
  • Our output: a real number q
  • Loss: |y − q|
  • p = Pr[+|x]
  • y = 1 (for +) or 0 (for −)
  • Expected absolute loss:
    L(q) = p(1−q) + (1−p)q
  • What is the optimal q?
    • Still predict q = 1 when p > 1/2


ML model: complete information

• Quadratic loss:
  • Our output: a real number q
  • Loss: (y − q)²
  • p = Pr[+|x]
  • y = 1 (for +) or 0 (for −)
• Expected quadratic loss:
  L(q) = p(1−q)² + (1−p)q²
• What is the optimal q? We know p!
• Minimizing the loss: compute the derivative
  L′(q) = −2p(1−q) + 2(1−p)q
  L′(q) = 0 ⇒ q = p
  L″(q) = 2p + 2(1−p) = 2 > 0
• We found a minimum at q = p


ML model: complete information

• Logarithmic loss:
  • Our output: a real number q
  • p = Pr[+|x]
  • Loss:
    • For y = +: −log(q)
    • For y = −: −log(1−q)
• Expected logarithmic loss:
  L(q) = −p·log(q) − (1−p)·log(1−q)
• What is the optimal q? We know p!
• Minimizing the loss: compute the derivative
  L′(q) = −p/q + (1−p)/(1−q)
  L′(q) = 0 ⇒ q = p
  L″(q) = p/q² + (1−p)/(1−q)² > 0
• We found a minimum at q = p
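A small numerical check of the three loss slides above: the expected absolute loss is minimized at an endpoint (q = 1 when p > 1/2), while the expected quadratic and logarithmic losses are both minimized at q = p. A minimal sketch, with p chosen arbitrarily:

```python
import math

p = 0.7  # assumed Pr[+|x] for the example

def abs_loss(q):  return p * (1 - q) + (1 - p) * q
def quad_loss(q): return p * (1 - q) ** 2 + (1 - p) * q ** 2
def log_loss(q):  return -p * math.log(q) - (1 - p) * math.log(1 - q)

# Grid search over q in (0, 1); endpoints avoided so log loss stays finite.
grid = [i / 1000 for i in range(1, 1000)]
for name, loss in [("absolute", abs_loss), ("quadratic", quad_loss), ("log", log_loss)]:
    q_star = min(grid, key=loss)
    print(f"{name:9s} loss minimized at q = {q_star:.3f}")
# quadratic and log are minimized at q = p = 0.7; absolute loss pushes q toward 1.
```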


Estimating hypothesis error

• Consider a hypothesis h
  • Assume h(x) ∈ {0,1}
  • Error: Pr[h(x) ≠ y]
• How can we estimate the error?
• Take a sample S from D:
  S = {(x_i, y_i) : 1 ≤ i ≤ m}
• Observed error:
  (1/m) Σ_{i=1}^m I(h(x_i) ≠ y_i)
• How accurate is the observed error?
• Laws of large numbers: it converges in the limit
• We want accuracy for a finite sample
  • Tradeoff: error vs. sample size
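A minimal simulation of this estimate. The distribution D (one Gaussian per class) and the threshold hypothesis are illustrative assumptions, not from the slides:

```python
import random

random.seed(0)

def sample_from_D():
    """Toy D: label 1 w.p. 0.5 with x ~ N(1,1), label 0 with x ~ N(-1,1)."""
    y = 1 if random.random() < 0.5 else 0
    x = random.gauss(1.0 if y == 1 else -1.0, 1.0)
    return x, y

h = lambda x: 1 if x > 0 else 0  # a fixed threshold hypothesis

m = 10000
S = [sample_from_D() for _ in range(m)]
observed_error = sum(h(x) != y for x, y in S) / m
print(f"observed error on m={m} samples: {observed_error:.3f}")  # true error ~0.16
```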


Generalization bounds

• Markov inequality: for a r.v. Z ≥ 0,
  Pr[Z > α] ≤ E[Z] / α
• Chebyshev inequality:
  Pr[|Z − E[Z]| ≥ β] ≤ Var(Z) / β²
• Recall:
  Var(Z) = E[(Z − E[Z])²] = E[Z²] − E²[Z]


Generalization bounds

• Chernoff/Hoeffding inequality: let Z_1, ..., Z_m be i.i.d. Bernoulli r.v. with E[Z_i] = p. Then
  Pr[ |(1/m) Σ_{i=1}^m Z_i − p| ≥ ε ] ≤ 2·exp(−2ε²m)
  A quantitative central limit theorem.
• How to think about the bound:
  • Suppose we can set the sample size m
  • The accuracy parameter is ε
  • We want the bound to hold with high probability, i.e. probability 1 − δ
• Setting δ = 2·exp(−2ε²m) and solving for m:
  m = (1/(2ε²)) · log(2/δ)
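A quick empirical sanity check of the inequality, with illustrative parameters: simulate many size-m Bernoulli samples and compare the observed deviation frequency with 2·exp(−2ε²m):

```python
import math, random

random.seed(1)
p, m, eps, trials = 0.3, 200, 0.1, 5000

deviations = 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(m)) / m
    if abs(mean - p) >= eps:
        deviations += 1

print(f"empirical Pr[|mean - p| >= eps]: {deviations / trials:.4f}")
print(f"Hoeffding bound 2*exp(-2*eps^2*m): {2 * math.exp(-2 * eps**2 * m):.4f}")
```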


Hypothesis class

• Hypothesis class: set of functions used for prediction
  • Implicitly assumes something about the true labels
• Realizable case: some function in the class produces the true labels, e.g. a linear function
  f(x) = Σ_{i=1}^d a_i x_i
  • Can solve d linearly independent equations to recover the coefficients (see the sketch below)
  • No solution ⇒ the hypothesis class is incorrect

[Figure: scatter plot ("Y-Values") of points fit by a linear function]
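A minimal sketch of the realizable linear case on made-up data: d labeled instances with linearly independent attribute vectors determine the coefficients exactly via one linear solve.

```python
import numpy as np

# Hypothetical realizable data: labels generated by f(x) = a . x with a = (2, -1, 0.5).
a_true = np.array([2.0, -1.0, 0.5])
X = np.array([[1.0, 0.0, 0.0],   # d = 3 linearly independent instances
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
y = X @ a_true                 # observed labels

a_hat = np.linalg.solve(X, y)  # solve the d linear equations X a = y
print(a_hat)                   # recovers (2, -1, 0.5)
```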


Hypothesis class: Regression

• Hypothesis class:
  h(x) = Σ_{i=1}^d a_i x_i
• Only a restriction on our predictor
• Look for the best linear estimator
• Need to specify a loss!
• Called: linear regression (a least-squares sketch follows)
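One common choice is the squared loss, under which the best linear estimator has a closed form (the normal equations). A minimal sketch on synthetic data; the noise level and coefficients are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 100, 3
X = rng.normal(size=(m, d))                 # instances as rows
a_true = np.array([1.0, -2.0, 0.5])
y = X @ a_true + 0.1 * rng.normal(size=m)   # noisy linear labels

# Minimize sum_i (y_i - a . x_i)^2 via the normal equations X^t X a = X^t y.
a_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(a_hat)  # close to a_true
```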


Hypothesis class: Classification

• Many possible classes:
  • Linear separator:
    h(x) = sign(Σ_{i=1}^d a_i x_i)
  • Neural network (deep learning): multiple layers
  • Decision trees
  • Memorizing the sample: does it make sense?


[Figure: a small decision tree with internal tests X1 > 3, X5 > 9, X9 < 2]


Hypothesis class: Classification

• PAC model:
  • Given a hypothesis class H
  • Find h ∈ H with near-optimal loss
    • Accuracy: ε
    • Confidence: δ
• Error of a hypothesis: Pr[h(x) ≠ y]
• Observed error: (1/m) Σ_{i=1}^m I(h(x_i) ≠ y_i)
• Empirical risk minimization (ERM):
  • Select the h ∈ H with the lowest observed error (a small sketch follows)
• What should we expect about the true error of h?
  • Overfitting
• Generalization ability:
  • The difference between true and observed error
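A minimal ERM sketch over a small finite class. The class of threshold classifiers h_t(x) = 1 iff x > t, over a grid of thresholds, is an illustrative choice:

```python
import random

random.seed(2)
# Toy labeled sample: positives tend to have larger x values.
S = ([(random.gauss(1.0, 1.0), 1) for _ in range(100)]
     + [(random.gauss(-1.0, 1.0), 0) for _ in range(100)])

def obs_error(t, sample):
    """Observed error of the threshold classifier h_t(x) = 1 iff x > t."""
    return sum((x > t) != (y == 1) for x, y in sample) / len(sample)

H = [t / 10 for t in range(-30, 31)]  # a finite class of 61 thresholds
t_erm = min(H, key=lambda t: obs_error(t, S))
print(f"ERM threshold: {t_erm:.1f}, observed error: {obs_error(t_erm, S):.3f}")
```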


Hypothesis class: Generalization

• Why is the single-hypothesis bound not enough? The difference between:
  • For every h: Pr[obs-error > ε] < δ
  • Pr[for every h: obs-error > ε] < δ
• We need the bound to hold for all hypotheses simultaneously
• Generalization bounds, and what they look like:
  • Finite class H:
    m = (1/(2ε²)) · log(2|H|/δ)
  • Infinite class:
    • The VC dimension replaces log|H|
    • Approximately the number of free parameters
    • A hyperplane in d dimensions has VC dimension d + 1
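The finite-class bound turns directly into a sample-size calculator; a minimal sketch (parameter values are illustrative):

```python
import math

def sample_size(eps, delta, H_size=1):
    """Smallest m with 2*|H|*exp(-2*eps^2*m) <= delta (Hoeffding + union bound)."""
    return math.ceil(math.log(2 * H_size / delta) / (2 * eps ** 2))

print(sample_size(0.1, 0.05))               # one hypothesis: m = 185
print(sample_size(0.1, 0.05, H_size=1000))  # |H| = 1000: m = 530
```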


Model Selection: complexity vs generalization

• How to select from a "rich" hypothesis class?
  • Examples: intervals, polynomials
• How can we approach it systematically?
• Adding a penalty:
  • A bound on overfitting
  • SRM (structural risk minimization)
• Using a prior:
  • MDL (minimum description length)


[Figure: training and testing error as a function of model complexity]


Bayesian Inference

• A complete "world"
• The hypothesis is selected from a prior
  • The prior is a distribution over F, the class of target functions
• Given a sample, we can update:
  • Compute a posterior
• Given an instance:
  • Predict using the posterior
• Using Bayes rule:
  Pr[f|S] = Pr[S|f] · Pr[f] / Pr[S]
  • Pr[f]: the prior
  • Pr[f|S]: the posterior
  • Pr[S]: the probability of the data (normalization)


Bayesian Inference

• Using the posterior:
  • Compute the exact posterior
    • Computationally expensive
  • Maximum A Posteriori (MAP): the single most likely hypothesis
  • Maximum Likelihood (ML): the single hypothesis that maximizes the probability of the observations
• Where do we get the prior from?
• Bayesian networks: a compact joint distribution
• Naïve Bayes: attributes are independent given the label
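A minimal sketch of the posterior, MAP, and ML computations over a toy discrete class: each hypothesis is a coin bias, and the prior (assumed here to favor the fair coin) is illustrative:

```python
# Toy hypothesis class F: coin biases. The prior is an illustrative assumption.
F = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [0.1, 0.2, 0.4, 0.2, 0.1]

S = [1, 1, 0, 1, 1, 1]  # observed sample: 5 heads, 1 tail

def likelihood(f, sample):
    """Pr[S|f] for i.i.d. coin flips with bias f."""
    p = 1.0
    for s in sample:
        p *= f if s == 1 else (1 - f)
    return p

# Bayes rule: posterior ~ Pr[S|f] * Pr[f], normalized by Pr[S].
unnorm = [likelihood(f, S) * pf for f, pf in zip(F, prior)]
pr_S = sum(unnorm)
posterior = [u / pr_S for u in unnorm]

f_map = F[max(range(len(F)), key=lambda i: posterior[i])]
f_ml = F[max(range(len(F)), key=lambda i: likelihood(F[i], S))]
print(f"posterior: {[round(p, 3) for p in posterior]}")
print(f"MAP: {f_map}, ML: {f_ml}")
```

Note that MAP and ML can disagree: here the prior pulls the MAP estimate toward the fair coin while ML picks the bias that best fits the sample alone.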


Nearest neighbor

• A most intuitive approach
• No hypothesis class: the raw data is the hypothesis
• Given a point x:
  • Find the k nearest points
    • Needs an informative metric
  • Take a weighted average of their labels
• Performance:
  • Generalization ability
  • There are bad examples
  • Reasonable in practice
• Requires normalization (a metric):
  • Measuring height in cm or meters?
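A minimal k-NN sketch; for brevity it uses a plain (unweighted) average of the neighbors' labels, and the data and k are made up:

```python
import math

def knn_predict(train, x, k=3):
    """Average the labels of the k training points nearest to x (Euclidean)."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], x))[:k]
    return sum(y for _, y in nearest) / k

# Toy 2-D data: label 1 near (1,1), label 0 near (-1,-1).
train = [((1.0, 1.2), 1), ((0.9, 0.8), 1), ((1.1, 1.0), 1),
         ((-1.0, -0.9), 0), ((-1.2, -1.1), 0), ((-0.8, -1.0), 0)]
print(knn_predict(train, (0.8, 0.9)))    # ~1.0
print(knn_predict(train, (-0.9, -1.0)))  # ~0.0
```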


How can we normalize?

• Actually, data dependent
• For now, individual attributes:
  • Gaussian-like: x_i → (x_i − μ)/σ, mapping N(μ,σ) → N(0,1)
  • Uniform-like: x_i → (x_i − x_min)/(x_max − x_min), mapping [a,b] → [0,1]
  • Exponential-like: x_i → x_i/λ, mapping exp(λ) → exp(1)
• What about categorical attributes? (e.g., color)
• Mahalanobis distance: a multi-attribute Gaussian
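A minimal sketch of the first two normalizations on one attribute column, using sample statistics in place of the true distribution parameters:

```python
import statistics

xs = [170.0, 182.0, 165.0, 175.0, 190.0]  # e.g., heights in cm

# Gaussian-like: z-score, (x - mu) / sigma, mapping roughly to N(0, 1).
mu, sigma = statistics.mean(xs), statistics.stdev(xs)
z = [(x - mu) / sigma for x in xs]

# Uniform-like: min-max, (x - min) / (max - min), mapping to [0, 1].
lo, hi = min(xs), max(xs)
mm = [(x - lo) / (hi - lo) for x in xs]

print([round(v, 2) for v in z])
print([round(v, 2) for v in mm])
```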


ML and optimization

• Optimization is an important part of ML
  • Both local and global optima
  • Critical in minimizing the loss
  • ERM
• Efficient methods:
  • What counts as efficient varies
  • Linear and convex optimization
  • Gradient-based methods
• Overcoming computational hardness:
  • Example 1:
    • Minimizing the number of errors with a hyperplane is hard (NP-hard)
    • Minimizing the hinge loss is a linear program
    • The hinge loss is the distance from the threshold (see the sketch below)
  • Example 2: sparsity
  • Example 3: kernels
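The hinge loss from Example 1 in code: with labels y ∈ {−1, +1} and a linear score ⟨w, x⟩, the loss is zero once the point is on the correct side with margin at least 1, and grows linearly otherwise. A minimal sketch:

```python
def hinge_loss(w, x, y):
    """max(0, 1 - y * <w, x>): a convex surrogate for the 0-1 loss."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    return max(0.0, 1.0 - y * score)

w = [1.0, -0.5]
print(hinge_loss(w, [2.0, 1.0], +1))  # score 1.5, margin ok -> 0.0
print(hinge_loss(w, [0.5, 0.5], +1))  # score 0.25 -> loss 0.75
print(hinge_loss(w, [2.0, 1.0], -1))  # wrong side -> loss 2.5
```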


Unsupervised learning

• We do not have labels
  • Think: users' behavior
• In some applications the labels are not observable
• Somewhat ill defined: you can cluster in many ways
• Goal:
  • Better understand the data
  • Segment the data
• Less emphasis on generalization
• Toy example (2D)


Clustering: Unsupervised learning


Unsupervised learning

• Goal: recover a hidden structure
  • Use it (maybe) to predict
• Method: assume a generative structure and recover its parameters
• Example:
  • A mixture of k Gaussian distributions
  • Each Gaussian should be a cluster
  • Recover using k-means


Unsupervised learning: k-means

• Start with k random points as centers
• Assign each point to its nearest center
• Re-estimate each center as the average of the points assigned to it
• Continue until no more changes
• A special case of the Expectation-Maximization (EM) algorithm
• Objective function: the sum of distances from points to their centers
• The algorithm terminates: each update can only decrease the objective
• Running time:
  • There are bad (worst-case) examples
  • Works fairly fast in practice (see the sketch below)
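A minimal k-means sketch following the steps above; the data and k are made up:

```python
import math, random

def kmeans(points, k, seed=0):
    """Lloyd's algorithm: alternate assignment and center re-estimation."""
    random.seed(seed)
    centers = random.sample(points, k)  # start with k random data points as centers
    while True:
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[j].append(p)
        # Re-estimate each center as the average of the points assigned to it.
        new_centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        if new_centers == centers:  # no more changes: terminate
            return centers, clusters
        centers = new_centers

pts = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.0), (5.2, 4.8)]
centers, clusters = kmeans(pts, k=2)
print(centers)
```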


Singular Value Decomposition (SVD)

• SVD:
  • X is the data matrix
    • Rows are attributes, columns are instances
    • X X^t is the correlation matrix
  • SVD: X = U D V^t
    • D is diagonal, holding the singular values
• Low-rank approximation:
  • Truncate D
  • Optimal in the Frobenius norm
• Recommendations:
  • Rows are users, columns are movies, entries are ratings
  • Predictor:
    • Each user has a vector u, each movie a vector v
    • The rating is ⟨u, v⟩
  • SVD recovers the user and movie vectors
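A minimal sketch of the truncated-SVD low-rank approximation with numpy; the matrix and rank are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))          # toy data matrix

U, d, Vt = np.linalg.svd(X, full_matrices=False)  # X = U @ diag(d) @ Vt

r = 2                                # keep the top-r singular values
X_r = U[:, :r] @ np.diag(d[:r]) @ Vt[:r, :]

# X_r is the best rank-r approximation of X in Frobenius norm.
print(np.linalg.norm(X - X_r, "fro"))
```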


Principal Component Analysis (PCA)

• Dimension reduction: d → d′ with d ≫ d′
• Intuition: try to capture the "structure"
• Objective: minimize the reconstruction error
  • U is a d×d′ orthonormal matrix
  • x → U^t x, such that U U^t x ≈ x
• PCA:
  min_U Σ_{i=1}^m ||x_i − U U^t x_i||²


[Figure: the Iris data set reduced to 2D with PCA]
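A minimal PCA sketch via SVD of the centered data, evaluating the reconstruction objective above; the toy data and d′ = 2 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 3 attributes with very different variances along each axis.
X = rng.normal(size=(100, 3)) @ np.array([[3, 0, 0], [0, 1, 0], [0, 0, 0.1]])

Xc = X - X.mean(axis=0)               # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
U = Vt[:2].T                          # d x d' matrix of top principal directions

Z = Xc @ U                            # projected points, U^t x
X_rec = Z @ U.T                       # reconstruction, U U^t x
print(np.sum((Xc - X_rec) ** 2))      # the minimized objective
```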


Structure of the Course

1. Introduction
2. PAC: Generalization
3. PAC: VC bounds
4. Perceptron: linear classifiers
5. Support Vector Machines (SVM)
6. Kernels
7. Stochastic Gradient Descent
8. Boosting
9. Decision Trees
10. Regression
11. Generative Models, EM
12. Clustering, k-means
13. PCA


ML books
