Machine Learning Lecture 1

Lecture Series on Machine Learning
Ravi Gupta and G. Bharadwaja Kumar, AU-KBC Research Centre, MIT Campus, Anna University


TRANSCRIPT

Page 1: Machine learning Lecture 1

Lecture Series on Machine Learning

Ravi Gupta and G. Bharadwaja Kumar, AU-KBC Research Centre,

MIT Campus, Anna University

Page 2: Machine learning Lecture 1

Books

Machine Learning by Tom M. Mitchell

Page 3: Machine learning Lecture 1

Lecture No. 1

Ravi Gupta, AU-KBC Research Centre,

MIT Campus, Anna University

Date: 5.03.2008

Page 4: Machine learning Lecture 1

What is Machine Learning?

Machine Learning (ML) is concerned with the question of how to construct computer programs that automatically improve with experience.

Page 5: Machine learning Lecture 1

When to Use Machine Learning?

• Computers are applied to solve a complex problem

• No known method for computing the output exists

• Computation of the output is expensive

Page 6: Machine learning Lecture 1

When to Use Machine Learning?

• Classification of protein types based on amino acid sequences

• Screening of credit applicants (who will default and who will not)

• Modeling a complex chemical reaction, when the precise interaction of the different reactants is unknown

Page 7: Machine learning Lecture 1

What does Machine Learning do?

Decision Function/ Hypothesis

Page 8: Machine learning Lecture 1

What does Machine Learning do?

Training Examples

Decision Function/ Hypothesis

Page 9: Machine learning Lecture 1

Supervised Learning

Apple

Decision Function/ Hypothesis

Orange

Supervised Classification

Page 10: Machine learning Lecture 1

Unsupervised Learning

Decision Function/ Hypothesis

Unsupervised Classification

Page 11: Machine learning Lecture 1

Binary Classification

Decision Function/ Hypothesis

Page 12: Machine learning Lecture 1

Multiclass Classification

Decision Function/ Hypothesis

Page 13: Machine learning Lecture 1

Application of Machine Learning

• Speech and Handwriting Recognition
• Robotics (robot locomotion)
• Search Engines (Information Retrieval)
• Learning to classify new astronomical structures
• Medical Diagnosis
• Learning to drive an autonomous vehicle
• Computational Biology / Bioinformatics
• Computer Vision (object detection algorithms)
• Detecting credit card fraud
• Stock market analysis
• Game playing
• …

Page 14: Machine learning Lecture 1

Search Engines

Page 15: Machine learning Lecture 1

Sky Image Cataloging and Analysis Tool (SKI CAT) to Classify new astronomical structures

Page 16: Machine learning Lecture 1

ALVINN (Autonomous Land Vehicle In a Neural Network)

Page 17: Machine learning Lecture 1

SPHINX - A speech recognizer

Page 18: Machine learning Lecture 1

Disciplines that contribute to the development of the ML field

Page 19: Machine learning Lecture 1

Definition (Learning)

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. (Tom Mitchell, 1998)

Page 20: Machine learning Lecture 1

Checker Learning Problem

A computer program that learns to play checkers might improve its performance, as measured by its ability to win at the class of tasks involving playing checkers games, through experience obtained by playing games against itself.

• Task T: playing checkers
• Performance measure P: % of games won against opponents
• Training experience E: playing practice games against itself

Page 21: Machine learning Lecture 1

A Handwriting Recognition Problem

• Task T : recognizing and classifying handwritten words within images

• Performance measure P: % of words correctly classified

• Training experience E : a database of handwritten words with given classifications

Page 22: Machine learning Lecture 1

A Robot Driving Learning Problem

• Task T : driving on public four-lane highways using vision sensors

• Performance measure P: average distance traveled before an error (as judged by a human overseer)

• Training experience E : a sequence of images and steering commands recorded while observing a human driver

Page 23: Machine learning Lecture 1

Designing a Learning System

Page 24: Machine learning Lecture 1

Checker Learning Problem

• Task T : playing checkers

• Performance measure P: % of games won against opponents

• Training experience E: playing practice games against itself

Page 25: Machine learning Lecture 1

Checker

Page 26: Machine learning Lecture 1

Choosing the Training Experience

The type of training experience E available to a system can have a significant impact on the success or failure of the learning system.

Page 27: Machine learning Lecture 1

Choosing the Training Experience

One key attribute is whether the training experience provides direct or indirect feedback regarding the choices made by the performance system.

Page 28: Machine learning Lecture 1

Direct Learning

Page 29: Machine learning Lecture 1

Indirect Learning

[Diagram: the learner observes only the final Win / Loss outcome of the game, not feedback on each Next Move.]

Page 30: Machine learning Lecture 1

A second attribute is the degree to which the learner controls the sequence of training examples.

Page 31: Machine learning Lecture 1

For example, the learner may rely on a teacher / expert to supply the next move for a given board state, or it may select its own next moves (as when playing against itself).

Page 32: Machine learning Lecture 1

A third attribute is how well the training experience represents the distribution of examples over which the final system performance P must be measured. (Learning is most reliable when the training examples follow a distribution similar to that of future test examples.)

Page 33: Machine learning Lecture 1

V. Anand

G. Kasparov

Page 34: Machine learning Lecture 1

Garry Kasparov playing chess with IBM's Deep Blue. Photo courtesy of IBM.

Page 35: Machine learning Lecture 1

Kasparov Monkey

Page 36: Machine learning Lecture 1

Geri’s Game by Pixar, in which an elderly man enjoys a game of chess with himself (available at http://www.pixar.com/shorts/gg/theater/index.html).

It won an Academy Award for Best Animated Short Film.

Page 37: Machine learning Lecture 1

Assumptions

• Let us assume that our system will train by playing games against itself.

• It is allowed to generate as much training data as time permits.

Page 38: Machine learning Lecture 1

Issues Related to Experience

What type of knowledge / experience should one learn?

How should the experience be represented? (some mathematical representation of the experience)

What should be the learning mechanism?

Page 39: Machine learning Lecture 1

Main objective of the checkers player

Best Next Move

Page 40: Machine learning Lecture 1

Target Function

Choose Move: B → M

Choose Move is a function

where the input B is the set of legal board states and the output M is the set of legal moves

Page 41: Machine learning Lecture 1

[Diagram: ChooseMove takes an input board state from B and produces an output move from M.]

Page 42: Machine learning Lecture 1

Target Function

Choose Move: B → M

M = Choose Move (B)

Page 43: Machine learning Lecture 1

Alternative Target Function

F : B → R

F is the target function; the input B is the set of legal board states, and the output R is the set of real numbers.

Page 44: Machine learning Lecture 1

[Diagram: from the input board state, each candidate next board state is scored by the target function, e.g. F = 10, F = 10.8, F = 14.3, F = 7.7, F = 6.3, F = 9.4; the move leading to the highest-scoring board state is preferred.]

Page 45: Machine learning Lecture 1

Representation of Target Function

• x1: the number of white pieces on the board
• x2: the number of red pieces on the board
• x3: the number of white kings on the board
• x4: the number of red kings on the board
• x5: the number of white pieces threatened by red (i.e., which can be captured on red's next turn)
• x6: the number of red pieces threatened by white

F'(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
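To make the representation concrete, here is a minimal Python sketch of the linear evaluation function F'(b). It is illustrative only: it assumes a board has already been reduced to its six feature counts x1..x6 (the lecture does not fix a concrete board encoding), and the weights are made-up numbers.

```python
# Hypothetical sketch: F'(b) as a linear function of the six board features.
def f_hat(x, w):
    """F'(b) = w0 + w1*x1 + ... + w6*x6, with x = [x1, ..., x6]."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# Illustrative weights (w0..w6) and a board with 12 white pieces, 12 red
# pieces, no kings, one white piece threatened, and no red pieces threatened.
w = [0.5, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]
x = [12, 12, 0, 0, 1, 0]
print(f_hat(x, w))  # 0.0 for this particular choice of weights
```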

Page 46: Machine learning Lecture 1

• Task T: playing checkers
• Performance measure P: % of games won in the world tournament
• Training experience E: games played against itself
• Target function: F : Board → R
• Target function representation: F'(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6

The problem of learning a checkers strategy thus reduces to the problem of learning values for the coefficients w0 through w6 in the target function representation.

Page 47: Machine learning Lecture 1

Adjustment of Weights

E ≡ Σ_{<b, F_train(b)> ∈ training examples} (F_train(b) − F'(b))²

where F_train(b) is the training value for board b and F'(b) is the value computed with the current weights. Learning adjusts the weights so as to minimize the squared error E.
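One standard way to drive E down is the LMS (least mean squares) update that Mitchell's textbook applies to this checkers problem: after each training example, every weight is nudged in proportion to the prediction error and the corresponding feature value. The sketch below is illustrative; the learning rate eta and the training tuples are invented, and f_hat repeats the linear F' from the earlier sketch so the snippet runs on its own.

```python
# LMS weight update sketch: w_i <- w_i + eta * (F_train(b) - F'(b)) * x_i
def f_hat(x, w):
    """F'(b) = w0 + w1*x1 + ... + w6*x6 (same linear form as above)."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def lms_pass(w, examples, eta=0.01):
    """One stochastic pass over (features, training value) pairs."""
    for x, f_train in examples:
        error = f_train - f_hat(x, w)        # F_train(b) - F'(b)
        w[0] += eta * error                  # bias weight w0 (feature x0 = 1)
        for i, xi in enumerate(x, start=1):
            w[i] += eta * error * xi         # weights w1..w6
    return w

# Two invented training boards: an even position valued 0, a winning one valued 100.
train = [([12, 12, 0, 0, 1, 0], 0.0), ([10, 6, 2, 0, 0, 3], 100.0)]
print(lms_pass([0.0] * 7, train))
```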

Page 48: Machine learning Lecture 1

Generic issues of Machine Learning

• What algorithms exist for learning general target functions from specific training examples?

• In what settings will particular algorithms converge to the desired function, given sufficient training data?

• Which algorithms perform best for which types of problems and representations?

• How much training data is sufficient?

• What is the best way to reduce the learning task to one or more function approximation problems?

Page 49: Machine learning Lecture 1

Generic issues of Machine Learning

• When and how can prior knowledge held by the learner guide the process of generalizing from examples?

• Can prior knowledge be helpful even when it is only approximately correct?

• What is the best strategy for choosing a useful next training experience, and how does the choice of this strategy alter the complexity of the learning problem?

• How can the learner automatically alter its representation to improve its ability to represent and learn the target function?

Page 50: Machine learning Lecture 1

Concept Learning

Page 51: Machine learning Lecture 1

Concept Learning

A task of acquiring a potential hypothesis (solution) that best fits the training examples.

Page 52: Machine learning Lecture 1

Concept Learning Task

Objective is to learn EnjoySport

{Sky, AirTemp, Humidity, Wind, Water, Forecast} → EnjoySport

Target concept: days on which Tom enjoys his favorite water sport.

Page 53: Machine learning Lecture 1

Concept Learning Task

Objective is to learn EnjoySport

{Sky, AirTemp, Humidity, Wind, Water, Forecast} → EnjoySport

<x1, x2, x3, x4, x5, x6> → <y>

(x1 … x6 are the input variables; y is the output)

Page 54: Machine learning Lecture 1

Notations

Page 55: Machine learning Lecture 1

Notations

• Instances (X)
• Target Concept (C)
• Training examples (D)

Page 56: Machine learning Lecture 1

Concept Learning

Acquiring the definition of a general category from a given set of positive and negative training examples of the category.

The goal is to find a hypothesis h in H such that h(x) = c(x) for every instance x in X.

Page 57: Machine learning Lecture 1

Instance Space

Suppose that the target hypothesis that we want to learn for the current problem is represented as a conjunction of all the attributes

IF Sky = … AND AirTemp = … AND Humidity = … AND Wind = … AND Water = … AND Forecast = … THEN EnjoySport = …

Page 58: Machine learning Lecture 1

Instance Space

Suppose the attribute Sky has three possible values, and that AirTemp, Humidity, Wind, Water, and Forecast each have two possible values.

Possible distinct instances = 3 * 2 * 2 * 2 * 2 * 2 = 96

Page 59: Machine learning Lecture 1

Hypothesis Space

Hypothesis Space: A set of all possible hypotheses

Possible syntactically distinct hypotheses for EnjoySport = 5 * 4 * 4 * 4 * 4 * 4 = 5120

• Sky has three possible values; counting the don’t-care value (?) as a fourth and the empty set Ø as a fifth gives 5 options.
• Each of the remaining attributes has two possible values plus (?) and Ø, giving 4 options.

(Both counts are recomputed in the sketch below.)
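As a quick check on the arithmetic of the last two slides, the snippet below recomputes both counts from the attribute value counts given there (Sky: 3 values, every other attribute: 2).

```python
from math import prod

values = [3, 2, 2, 2, 2, 2]               # Sky, AirTemp, Humidity, Wind, Water, Forecast
instances = prod(values)                  # distinct instances: 3 * 2**5 = 96
hypotheses = prod(v + 2 for v in values)  # add '?' and Ø per attribute: 5 * 4**5 = 5120
print(instances, hypotheses)              # -> 96 5120
```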

Page 60: Machine learning Lecture 1

Hypothesis Space

h = <Sunny, Warm, ?, Strong, Warm, Same>

(the ? for Humidity matches both Normal and High)

Page 61: Machine learning Lecture 1

Hypothesis Space

h = <Ø, Warm, ?, Strong, Warm, Same>

(a hypothesis containing Ø satisfies no instance, so it classifies every example as negative)

Page 62: Machine learning Lecture 1

Concept Learning as Search

Concept learning can be viewed as the task of searching through a large space of hypotheses implicitly defined by the hypothesis representation.

The goal of the concept learning search is to find the hypothesis that best fits the training examples.

Page 63: Machine learning Lecture 1

General-to-Specific Learning

Most General Hypothesis: h = <?, ?, ?, ?, ?, ?>

(Tom enjoys the sport every day, i.e., every instance is classified positive)

Most Specific Hypothesis: h = <Ø, Ø, Ø, Ø, Ø, Ø>

Page 64: Machine learning Lecture 1

General-to-Specific Learning

Consider the sets of instances that are classified positive by h1 and by h2. h2 imposes fewer constraints on the instance, so it classifies more instances as positive.

Any instance classified positive by h1 will also be classified positive by h2. Therefore, we say that h2 is more general than h1.

Page 65: Machine learning Lecture 1

Definition

Given hypotheses hj and hk, hj is more_general_than_or_equal_to hk if and only if any instance that satisfies hk also satisfies hj.

We can also say that hj is more_specific_than hk when hk is more_general_than hj.
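The definition can be turned into a small executable check. The sketch below is an illustration for conjunctive hypotheses like those above; the encoding ('?' for don't-care, 'O' standing in for Ø) and the two example hypotheses are our own choices, not taken from the slides.

```python
def satisfies(x, h):
    """Does hypothesis h classify instance x as positive?"""
    return all(c == "?" or c == v for c, v in zip(h, x))

def more_general_or_equal(hj, hk):
    """hj is more_general_than_or_equal_to hk iff every instance
    satisfying hk also satisfies hj."""
    if "O" in hk:                 # hk satisfies no instance at all,
        return True               # so the claim holds vacuously
    return all(cj == "?" or cj == ck for cj, ck in zip(hj, hk))

h_specific = ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")
h_general  = ("Sunny", "?",    "?",      "Strong", "?",    "?")

x = ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")
print(satisfies(x, h_general))                        # True
print(more_general_or_equal(h_general, h_specific))   # True
print(more_general_or_equal(h_specific, h_general))   # False
```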

Page 66: Machine learning Lecture 1

More_general_than relation

Page 67: Machine learning Lecture 1

FIND-S: Finding a Maximally Specific Hypothesis

Page 68: Machine learning Lecture 1

Step 1: FIND-S

h0 = <Ø, Ø, Ø, Ø, Ø, Ø>

Page 69: Machine learning Lecture 1

Step 2: FIND-S

h0 = <Ø, Ø, Ø, Ø, Ø, Ø>

x1 = <Sunny, Warm, Normal, Strong, Warm, Same>  (attribute values a1 a2 a3 a4 a5 a6)

Iteration 1

h1 = <Sunny, Warm, Normal, Strong, Warm, Same>

Page 70: Machine learning Lecture 1

Iteration 2

h1 = <Sunny, Warm, Normal, Strong, Warm, Same>

x2 = <Sunny, Warm, High, Strong, Warm, Same>

h2 = <Sunny, Warm, ?, Strong, Warm, Same>

Page 71: Machine learning Lecture 1

Iteration 3: x3 is a negative example, so FIND-S ignores it. h3 = <Sunny, Warm, ?, Strong, Warm, Same> (unchanged)

Page 72: Machine learning Lecture 1

Iteration 4

h3 = <Sunny, Warm, ?, Strong, Warm, Same>

x4 = <Sunny, Warm, High, Strong, Cool, Change>

h4 = <Sunny, Warm, ?, Strong, ?, ?>

Step 3: Output h4 = <Sunny, Warm, ?, Strong, ?, ?>
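Putting the whole trace together, here is a compact, runnable sketch of FIND-S on the EnjoySport data. One caveat: the negative example x3 that iteration 3 ignores is never spelled out on these slides, so the tuple below is the one from Mitchell's textbook table; 'O' again stands in for Ø.

```python
def find_s(examples):
    """FIND-S: start from the most specific hypothesis and minimally
    generalize it over the positive training examples."""
    h = ["O"] * len(examples[0][0])      # Step 1: h0 = <Ø, Ø, Ø, Ø, Ø, Ø>
    for x, label in examples:            # Step 2: scan the training examples
        if label != "Yes":
            continue                     # negative examples are ignored
        for i, (hi, xi) in enumerate(zip(h, x)):
            if hi == "O":
                h[i] = xi                # first positive example: copy its values
            elif hi != xi:
                h[i] = "?"               # conflict: generalize to don't-care
    return h                             # Step 3: output h

D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),   # assumed x3
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "Yes"),
]
print(find_s(D))  # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```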

Page 73: Machine learning Lecture 1
Page 74: Machine learning Lecture 1

Key Properties of FIND-S Algorithm

• Guaranteed to output the most specific hypothesis within H that is consistent with the positive training examples.

• The final hypothesis will also be consistent with the negative examples, provided the correct target concept is contained in H and the training examples are correct.

Page 75: Machine learning Lecture 1

Unanswered Questions by FIND-S

• Has the learner converged to the correct target concept? Although FIND-S will find a hypothesis consistent with the training data, it has no way to determine whether it has found the only hypothesis in H consistent with the data (i.e., the correct target concept).

• Why prefer the most specific hypothesis?
In case there are multiple hypotheses consistent with the training examples, FIND-S will find the most specific. It is unclear whether we should prefer this hypothesis over, say, the most general, or some other hypothesis of intermediate generality.

Page 76: Machine learning Lecture 1

Unanswered Questions by FIND-S

• Are the training examples consistent?

In most practical learning problems there is some chance that the training examples will contain at least some errors or noise. Such inconsistent sets of training examples can severely mislead FIND-S, given the fact that it ignores negative examples. We would prefer an algorithm that could at least detect when the training data is inconsistent and, preferably, accommodate such errors.

Page 77: Machine learning Lecture 1

Unanswered Questions by FIND-S

• What if there are several maximally specific consistent hypotheses?

In the hypothesis language H for the EnjoySport task, there is always a unique, most specific hypothesis consistent with any set of positive examples. However, for other hypothesis spaces (which will be discussed later) there can be several maximally specific hypotheses consistent with the data.