Learning: Learning from Observations


Page 1: Learning from Observations

Page 2: Course Overview

Introduction

Intelligent Agents

Search: problem solving through search, informed search

Games: games as search problems

Knowledge and Reasoning: reasoning agents, propositional logic, predicate logic, knowledge-based systems

Learning: learning from observation, neural networks

Conclusions

Page 3: Chapter Overview

Motivation

Objectives

Learning from Observation: Learning Agents, Inductive Learning, Learning Decision Trees

Computational Learning Theory

Probably Approximately Correct (PAC) Learning

Learning in Neural Networks: Neurons and the Brain, Neural Networks, Perceptrons, Multi-layer Networks, Applications

Important Concepts and Terms

Chapter Summary

Page 4: Bridge-In

“knowledge infusion” is not always the best way of providing an agent with knowledge

impractical, tedious

incomplete, imprecise, possibly incorrect

adaptivity: an agent can expand and modify its knowledge base to reflect changes

improved performance: through learning, the agent can make better decisions

autonomy: without learning, an agent can hardly be considered autonomous

Page 5: Motivation

learning is important for agents to deal with unknown environments and changes

the capability to learn is essential for the autonomy of an agent

in many cases, it is more efficient to train an agent via examples, than to “manually” extract knowledge from the examples, and “instill” it into the agent

agents capable of learning can improve their performance

Page 6: Objectives

be aware of the necessity of learning for autonomous agents

understand the basic principles and limitations of inductive learning from examples

apply decision tree learning to deterministic problems characterized by Boolean functions

understand the basic learning methods of perceptrons and multi-layer neural networks

know the main advantages and problems of learning in neural networks

Page 7: Learning

an agent tries to improve its behavior through observation, reasoning, or reflection

learning from experience

memorization of past percepts, states, and actions

generalizations, identification of similar experiences

forecasting

prediction of changes in the environment

theories

generation of complex models based on observations and reasoning

Page 8: Learning from Observation

Learning Agents

Inductive Learning

Learning Decision Trees

Page 9: Learning Agents

based on previous agent designs, such as reflexive, model-based, goal-based agents

those aspects of agents are encapsulated into the performance element of a learning agent

a learning agent has an additional learning element, usually used in combination with a critic and a problem generator for better learning

most agents learn from examples (inductive learning)

Page 10: Learning Agent Model

[Diagram: learning agent model. Inside the Agent, the Performance Element selects actions; the Critic compares Sensor percepts against a Performance Standard and gives Feedback to the Learning Element; the Learning Element sends Changes (Knowledge) to the Performance Element and Learning Goals to the Problem Generator; the agent acts on the Environment through its Effectors]

Page 11: Forms of Learning

supervised learning: the agent tries to find a function that matches examples from a sample set; each example provides an input together with the correct output; a teacher provides feedback on the outcome; the teacher can be an outside entity, or part of the environment

unsupervised learning: the agent tries to learn from patterns without corresponding output values

reinforcement learning: the agent does not know the exact output for an input, but it receives feedback on the desirability of its behavior; the feedback can come from an outside entity, the environment, or the agent itself; the feedback may be delayed, and not follow the respective action immediately

Page 12: Feedback

provides information about the actual outcome of actions

supervised learning: both the input and the output of a component can be perceived by the agent directly; the output may be provided by a teacher

reinforcement learning: feedback concerning the desirability of the agent's behavior is available; it may not be directly attributable to a particular action, and may occur only after a sequence of actions; the agent or component knows that it did something right (or wrong), but not which action caused it

Page 13: Prior Knowledge

background knowledge available before a task is tackled

can increase performance or decrease learning time considerably

many learning schemes assume that no prior knowledge is available

in reality, some prior knowledge is almost always available

but often in a form that is not immediately usable by the agent

Page 14: Inductive Learning

tries to find a function h (the hypothesis) that approximates a set of samples defining a function f

the samples are usually provided as input-output pairs (x, f(x))

supervised learning method

relies on inductive inference, or induction: conclusions are drawn from specific instances to more general statements

Page 15: Hypotheses

finding a suitable hypothesis can be difficult

since the function f is unknown, it is hard to tell if the hypothesis h is a good approximation

the hypothesis space describes the set of hypotheses under consideration

e.g. polynomials, sinusoidal functions, propositional logic, predicate logic, ...

the choice of the hypothesis space can strongly influence the task of finding a suitable function

while a very general hypothesis space (e.g. Turing machines) may be guaranteed to contain a suitable function, it can be difficult to find it

Ockham’s razor: if multiple hypotheses are consistent with the data, choose the simplest one

Page 16: Example Inductive Learning 1

input-output pairs displayed as points in a plane

the task is to find a hypothesis (function) that connects the points

either all of them, or most of them

various performance measures

number of points connected

minimal surface

lowest tension

[Figure: sample points plotted in the (x, f(x)) plane]

Page 17: Example Inductive Learning 2

[Figure: a hypothesis connecting the sample points in the (x, f(x)) plane]

Page 18: Example Inductive Learning 3

hypothesis expressed as a polynomial function

incorporates all samples

more complicated to calculate than linear segments

no discontinuities

better predictive power

[Figure: a polynomial hypothesis passing through all sample points in the (x, f(x)) plane]

Page 19: Example Inductive Learning 4

hypothesis is a linear function

does not incorporate all samples

extremely easy to compute

low predictive power

[Figure: a linear hypothesis approximating the sample points in the (x, f(x)) plane]

Page 20: Inductive Learning Models

Neural Networks

Linear Regression Model

Logistic Regression Model

Page 21: Neural Networks

complex networks of simple computing elements

capable of learning from examples with appropriate learning methods

collection of simple elements performs high-level operations

thought

reasoning

consciousness

Page 22: Neural Networks and the Brain

brain: a set of interconnected modules

performs information processing operations at various levels:

sensory input analysis

memory storage and retrieval

reasoning

feelings

consciousness

neurons: basic computational elements, heavily interconnected with other neurons

[Russell & Norvig, 1995]

Page 23: Neuron Diagram

soma: cell body

dendrites: incoming branches

axon: outgoing branch

synapse: junction between a dendrite and an axon from another neuron

[Russell & Norvig, 1995]

Page 24: Computer vs. Brain

                      Computer                          Brain
Computational units   1-1000 CPUs, 10^7 gates/CPU       10^11 neurons
Storage units         10^10 bits RAM, 10^11 bits disk   10^11 neurons, 10^14 synapses
Cycle time            10^-9 sec (1 GHz)                 10^-3 sec (1 kHz)
Bandwidth             10^9 bits/sec                     10^14 bits/sec
Neuron updates/sec    10^5                              10^14

Page 25: Computer Brain vs. Cat Brain

in 2009, IBM announced a supercomputer simulation "significantly smarter than a cat"

“IBM has announced a software simulation of a mammalian cerebral cortex that's significantly more complex than the cortex of a cat. And, just like the actual brain that it simulates, they still have to figure out how it works.”

http://arstechnica.com/science/news/2009/11/ibm-makes-supercomputer-significantly-smarter-than-cat.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

http://static.arstechnica.com/cat_computer_ars.jpg

Page 26: Google Neural Network

What does a really large NN learn from watching YouTube videos for one week?

NN implementation: computation spread across 16,000 CPU cores

more than 1 billion connections in the NN

http://googleblog.blogspot.com/2012/06/using-large-scale-brain-simulations-for.html

Page 27: Cat Discovery

"cat" discovery: the NN learned to identify a category of images with cats

Google blog post

https://plus.google.com/u/0/+ResearchatGoogle/posts/EMyhnBetd2F

published paper

http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/unsupervised_icml2012.pdf

http://1.bp.blogspot.com/-VENOsYD1uJc/T-nkLAiANtI/AAAAAAAAJWc/2KCTl3OsI18/s1600/cat+detection.jpeg

Page 28: Artificial Neuron Diagram

weighted inputs are summed up by the input function

the (nonlinear) activation function calculates the activation value, which determines the output

[Russell & Norvig, 1995]

Page 29: Common Activation Functions

Step_t(x) = 1 if x >= t, else 0

Sign(x) = +1 if x >= 0, else -1

Sigmoid(x) = 1 / (1 + e^(-x))

[Russell & Norvig, 1995]
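
The three activation functions above are easy to state directly in code; the following is a minimal Python sketch (the default threshold and the test inputs are illustrative choices, not taken from the slides):

import math

def step(x, t=0.0):
    """Step_t(x): 1 if x >= t, else 0."""
    return 1 if x >= t else 0

def sign(x):
    """Sign(x): +1 if x >= 0, else -1."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, step(x), sign(x), round(sigmoid(x), 3))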

Page 30: Neural Networks and Logic Gates

simple neurons can act as logic gates, given an appropriate choice of weights and threshold

a step function is used as the activation function (see the sketch below)

[Russell & Norvig, 1995]
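
As a concrete illustration of neurons acting as logic gates, here is a minimal sketch with a step activation function; the particular weights and thresholds (1 and 1.5 for AND, 1 and 0.5 for OR, -1 and -0.5 for NOT) are common textbook choices assumed by this sketch, not values read off the figure:

def unit(inputs, weights, threshold):
    """Threshold unit: step activation applied to the weighted sum of inputs."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def AND(a, b): return unit([a, b], [1, 1], 1.5)
def OR(a, b):  return unit([a, b], [1, 1], 0.5)
def NOT(a):    return unit([a], [-1], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
print(NOT(0), NOT(1))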

Page 31: Network Structures

in principle, networks can be arbitrarily connected; this is occasionally done to represent specific structures

semantic networks

logical sentences

makes learning rather difficult

layered structures: networks are arranged into layers

interconnections mostly between two layers

some networks have feedback connections

Page 32: Perceptrons

single-layer, feed-forward network

historically one of the first types of neural networks (late 1950s)

the output is calculated as a step function applied to the weighted sum of inputs

capable of learning simple, linearly separable functions

[Russell & Norvig, 1995]

Page 33: Perceptrons and Linear Separability

perceptrons can deal with linearly separable functions

some simple functions are not linearly separable, e.g. the XOR function

[Figure: the input points (0,0), (0,1), (1,0), (1,1) plotted for AND and for XOR; AND can be separated by a single line, XOR cannot]

[Russell & Norvig, 1995]

Page 34: Perceptrons and Linear Separability

linear separability can be extended to more than two dimensions

more difficult to visualize

[Russell & Norvig, 1995]

Page 35: Perceptrons and Learning

perceptrons can learn from examples through a simple learning rule (see the sketch below):

calculate the error of a unit Err_i as the difference between the correct output T_i and the calculated output O_i: Err_i = T_i - O_i

adjust the weight W_j of input I_j such that the error decreases: W_j := W_j + α * I_j * Err_i

α is the learning rate

this is a gradient descent search through the weight space

perceptrons led to great enthusiasm in the late 1950s and early 1960s, until Minsky & Papert (1969) analyzed the class of representable functions and found the linear separability problem
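
A minimal sketch of this learning rule, training a perceptron on the (linearly separable) AND function; the learning rate, epoch count, and the use of a constant bias input are illustrative assumptions:

# Perceptron trained on AND with the rule W_j := W_j + alpha * I_j * Err
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0, 0.0]        # weights[0] acts on a constant bias input of 1
alpha = 0.1                      # learning rate

def output(x):
    s = weights[0] + weights[1] * x[0] + weights[2] * x[1]
    return 1 if s >= 0 else 0    # step activation

for epoch in range(20):
    for x, target in examples:
        err = target - output(x)            # Err = T - O
        inputs = (1, x[0], x[1])
        for j in range(3):                  # W_j := W_j + alpha * I_j * Err
            weights[j] += alpha * inputs[j] * err

print([output(x) for x, _ in examples])     # [0, 0, 0, 1] once converged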

Page 36: Generic Neural Network Learning

basic framework for learning in neural networks:

function NEURAL-NETWORK-LEARNING(examples) returns network
   network := a network with randomly assigned weights
   for each e in examples do
      O := NEURAL-NETWORK-OUTPUT(network, e)
      T := observed output values from e
      update the weights in network based on e, O, and T
   return network

adjust the weights until the predicted output values O and the observed values T agree

Page 37: Notation

Output of the j-th layer: a^(j), which is the input of the (j+1)-th layer

Input vector: I^(0)

Final output: O_i

Weighted sum of the i-th sigmoid unit in the j-th layer: in_i^(j)

Number of sigmoid units in the j-th layer: m_j

in_i^{(j)} = \mathbf{a}^{(j-1)} \cdot \mathbf{W}_i^{(j)}, \qquad \mathbf{W}_i^{(j)} = ( w_{1,i}^{(j)}, \ldots, w_{l,i}^{(j)}, \ldots, w_{m_{j-1},i}^{(j)} )

[Figure: layered network with inputs I_k, hidden-layer activations a_j, outputs O_i, and weights W_kj (input to hidden) and W_ji (hidden to output)]

Page 38: Objective Function

Objective function for training the NN (squared error over the training set):

L(W) = \sum_{k=1}^{N} \bigl( y_k - h_W(x_k) \bigr)^2

Gradient descent algorithm (with learning rate \alpha):

W_{k+1} = W_k - \alpha \, \frac{\partial L(W)}{\partial W}

Loss function for a single example:

Loss(W) = \bigl( y_k - h_W(x_k) \bigr)^2
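
To make the update W := W - α ∂L/∂W concrete, here is a minimal sketch that fits a single linear unit h_W(x) = w*x + b by batch gradient descent on the squared loss; the data points, learning rate, and step count are invented illustration values:

# Batch gradient descent on L(W) = sum_k (y_k - h_W(x_k))^2 with h_W(x) = w*x + b
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # lies on y = 2x + 1
w, b = 0.0, 0.0
alpha = 0.05                                              # learning rate

for step in range(2000):
    dw = sum(-2 * (y - (w * x + b)) * x for x, y in data)  # dL/dw
    db = sum(-2 * (y - (w * x + b)) for x, y in data)      # dL/db
    w -= alpha * dw
    b -= alpha * db

print(round(w, 3), round(b, 3))   # approaches w = 2, b = 1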

Page 39: The Backpropagation Method

For unit i in layer j, in_i^{(j)} = \mathbf{a}^{(j-1)} \cdot \mathbf{W}_i^{(j)}.

Gradient with respect to the summed input (for one example with target y and output O = g(in)):

\frac{\partial \, Loss}{\partial \, in_i^{(j)}} = \frac{\partial (y - O)^2}{\partial \, in_i^{(j)}} = -2 \, (y - O) \, \frac{\partial g}{\partial \, in_i^{(j)}}

Local gradient (the constant factor 2 is absorbed into the learning rate):

\delta_i^{(j)} = (y - O) \, \frac{\partial g}{\partial \, in_i^{(j)}}

Weight update:

\mathbf{W}_i^{(j)} \leftarrow \mathbf{W}_i^{(j)} + \alpha \, \delta_i^{(j)} \, \mathbf{a}^{(j-1)}

Page 40: Weight Changes in the Final Layer

For a sigmoid output unit, g' = g(1 - g), so the local gradient is

\delta^{(k)} = (y - O) \, \frac{\partial g}{\partial s} = (y - O) \, g \, (1 - g)

Weight update:

\mathbf{W}^{(k)} \leftarrow \mathbf{W}^{(k)} + \alpha \, (y - O) \, g \, (1 - g) \, \mathbf{a}^{(k-1)}

Page 41: Weights in Intermediate Layers

Local gradient: the final output h_W depends on in_i^{(j)} only through the summed inputs to the sigmoid units in the (j+1)-th layer, so by the chain rule

\delta_i^{(j)} = (y - O) \, \frac{\partial g}{\partial \, in_i^{(j)}} = (y - O) \sum_{l=1}^{m_{j+1}} \frac{\partial g}{\partial \, in_l^{(j+1)}} \, \frac{\partial \, in_l^{(j+1)}}{\partial \, in_i^{(j)}} = \sum_{l=1}^{m_{j+1}} \delta_l^{(j+1)} \, \frac{\partial \, in_l^{(j+1)}}{\partial \, in_i^{(j)}}

This requires the computation of \partial \, in_l^{(j+1)} / \partial \, in_i^{(j)}.

Page 42: Weight Update in Hidden Layers

Since in_l^{(j+1)} = \sum_{v=1}^{m_j} a_v^{(j)} \, w_{v,l}^{(j+1)},

\frac{\partial \, in_l^{(j+1)}}{\partial \, in_i^{(j)}} = \sum_{v=1}^{m_j} w_{v,l}^{(j+1)} \, \frac{\partial a_v^{(j)}}{\partial \, in_i^{(j)}}

For v \neq i: \partial a_v^{(j)} / \partial \, in_i^{(j)} = 0. For v = i: \partial a_i^{(j)} / \partial \, in_i^{(j)} = g(1 - g), the derivative of the sigmoid.

Consequently,

\delta_i^{(j)} = g \, (1 - g) \sum_{l=1}^{m_{j+1}} \delta_l^{(j+1)} \, w_{i,l}^{(j+1)}

and the hidden-layer weight update is

\mathbf{W}_i^{(j)} \leftarrow \mathbf{W}_i^{(j)} + \alpha \, \delta_i^{(j)} \, \mathbf{a}^{(j-1)}

Page 43: Weight Update in Hidden Layers (cont.)

Note the recursive equation for the local gradient:

\delta^{(k)} = g \, (1 - g) \, (y - O)   (output layer)

\delta_i^{(j)} = g \, (1 - g) \sum_{l=1}^{m_{j+1}} \delta_l^{(j+1)} \, w_{i,l}^{(j+1)}   (hidden layers)

Backpropagation: the error is back-propagated from the output layer to the input layer; the local gradient of the later layer is used in calculating the local gradient of the earlier layer.

Page 44: Example: Even Parity Function

f(x_1, x_2) = x_1 x_2 + \bar{x}_1 \bar{x}_2

Learning rate: 1.0

input      target
1 0 1      0
0 0 1      1
0 1 1      0
1 1 1      1
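
A minimal backpropagation sketch for this even-parity example, using one hidden layer of three sigmoid units, the squared loss, the recursive local gradients from the preceding slides, and learning rate 1.0; the network size, random seed, and epoch count are illustrative assumptions, and convergence is not guaranteed for every initialization:

import math, random

random.seed(0)
def g(s):                                    # sigmoid activation
    return 1.0 / (1.0 + math.exp(-s))

# Even parity of (x1, x2); the third input is the constant 1 from the table above.
data = [((1, 0, 1), 0), ((0, 0, 1), 1), ((0, 1, 1), 0), ((1, 1, 1), 1)]

H, alpha = 3, 1.0
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]   # input -> hidden
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]                   # hidden (+ bias) -> output

def forward(x):
    a = [g(sum(w * xi for w, xi in zip(W1[i], x))) for i in range(H)] + [1.0]
    return a, g(sum(w * ai for w, ai in zip(W2, a)))

for epoch in range(5000):
    for x, y in data:
        a, O = forward(x)
        delta_out = (y - O) * O * (1 - O)                      # (y - O) g (1 - g)
        delta_hid = [a[i] * (1 - a[i]) * delta_out * W2[i] for i in range(H)]
        for i in range(H + 1):                                 # output-layer update
            W2[i] += alpha * delta_out * a[i]
        for i in range(H):                                     # hidden-layer update
            for j in range(3):
                W1[i][j] += alpha * delta_hid[i] * x[j]

for x, y in data:
    print(x, y, round(forward(x)[1], 2))   # outputs should approach the targets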

Page 45: Generalization, Accuracy, Overfitting

Generalization ability: the NN appropriately classifies vectors not in the training set; the measurement is accuracy.

Curve fitting: the number of training input vectors should exceed the number of degrees of freedom of the network. Given m data points, is an (m-1)-degree polynomial the best model? No, it cannot capture any special information (it merely interpolates the training points).

Overfitting: extra degrees of freedom are essentially just fitting the noise. Given sufficient data, the Occam's Razor principle dictates choosing the lowest-degree polynomial that adequately fits the data.

Page 46: Overfitting

Page 47: Applications

domains and tasks where neural networks are successfully used

handwriting recognition, speech recognition

control problems

juggling, truck backup problem

series prediction

weather, financial forecasting

categorization

sorting of items (fruit, characters, phonemes, …)

Page 48: Inductive Learning Models

Neural Networks

Linear Regression Model

Logistic Regression Model

Page 49: Linear Regression

Data: (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)

x_i is the predictor (regressor, covariate, feature, independent variable)

y_i is the response (dependent variable, outcome)

We denote the regression function by \eta(x) = E(Y \mid x), the conditional expectation of Y given x.

The linear regression model assumes a specific linear form, \eta(x) = \alpha + \beta x, which is usually thought of as an approximation to the truth.

Page 50: Fitting by Least Squares

Minimize:

(\hat{\beta}_0, \hat{\beta}) = \arg\min_{\beta_0, \beta} \sum_{i=1}^{N} \bigl( y_i - \beta_0 - \beta x_i \bigr)^2

Solutions are

\hat{\beta} = \frac{\sum_{i=1}^{N} (x_i - \bar{x}) \, y_i}{\sum_{j=1}^{N} (x_j - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta} \bar{x}

\hat{y}_i = \hat{\beta}_0 + \hat{\beta} x_i are called the fitted or predicted values

r_i = y_i - \hat{\beta}_0 - \hat{\beta} x_i are called the residuals
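
A small numeric check of the closed-form least-squares solution above; the five data points are invented for illustration:

# Simple least squares fit of y = beta0 + beta * x (formulas from the slide)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]          # roughly y = 2x

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
beta = sum((x - xbar) * y for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
beta0 = ybar - beta * xbar

fitted = [beta0 + beta * x for x in xs]            # fitted / predicted values
residuals = [y - f for y, f in zip(ys, fitted)]    # residuals
print(round(beta0, 3), round(beta, 3))             # close to 0 and 2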

Page 51: Confidence Intervals

We often assume further that

y_i = \beta_0 + \beta x_i + \varepsilon_i, \qquad E(\varepsilon_i) = 0, \quad Var(\varepsilon_i) = \sigma^2

Estimate \sigma^2 by \hat{\sigma}^2 = \sum_i (y_i - \hat{y}_i)^2 / (N - 2). Then

se(\hat{\beta}) = \left[ \frac{\hat{\sigma}^2}{\sum_i (x_i - \bar{x})^2} \right]^{1/2}

Under the additional assumption of normality for the \varepsilon_i, a 95% confidence interval for \beta is:

\hat{\beta} \pm 1.96 \, se(\hat{\beta})

Page 52: Fitted Regression Line

Fitted regression line with pointwise standard errors:

\hat{\eta}(x) \pm 2 \, se(\hat{\eta}(x))

Page 53: Multiple Linear Regression

Model is

f_i = \beta_0 + \sum_{j=1}^{p} x_{ij} \beta_j

Equivalently in matrix notation:

\mathbf{f} = \mathbf{X} \boldsymbol{\beta}

f is the N-vector of predicted values

X is the N x (p+1) matrix of regressors, with ones in the first column

\beta is a (p+1)-vector of parameters

Page 54: Estimation by Least Squares

\hat{\boldsymbol{\beta}} = \arg\min_{\beta} \sum_{i=1}^{N} \Bigl( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij} \beta_j \Bigr)^2 = \arg\min_{\beta} \, (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^T (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})

Solution is

\hat{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}, \qquad \hat{\mathbf{y}} = \mathbf{X} \hat{\boldsymbol{\beta}}

Also

Var(\hat{\boldsymbol{\beta}}) = (\mathbf{X}^T \mathbf{X})^{-1} \sigma^2
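
A minimal NumPy sketch of the normal-equation solution \hat{\beta} = (X^T X)^{-1} X^T y above; the synthetic data is an illustrative assumption:

import numpy as np

# Toy data: y = 1 + 2*x1 - 3*x2 plus a little noise (invented values)
rng = np.random.default_rng(0)
N, p = 50, 2
Xraw = rng.normal(size=(N, p))
y = 1 + 2 * Xraw[:, 0] - 3 * Xraw[:, 1] + 0.1 * rng.normal(size=N)

X = np.column_stack([np.ones(N), Xraw])          # N x (p+1), ones in the first column
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # (X^T X)^{-1} X^T y
y_hat = X @ beta_hat                             # fitted values
print(np.round(beta_hat, 2))                     # close to [1, 2, -3]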

Page 55: Inductive Learning Models

Neural Networks

Linear Regression Model: Linear Regression for Classification

Logistic Regression Model

Page 56: Example: USPS Digit Recognition

Page 57: Linear Regression

Page 58: Linear Regression

The response classes are coded by indicator variables: K classes correspond to K indicator variables Y_k, k = 1, 2, \ldots, K, each indicating one class. The N training instances of the indicator vector form an N x K indicator response matrix \mathbf{Y}.

For the data set \mathbf{X} there is a map \mathbf{X} \rightarrow \mathbf{Y}.

According to the linear regression model:

\hat{f}(x) = (1, x^T) \hat{\mathbf{B}}

\hat{\mathbf{Y}} = \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{Y}

Page 59: Linear Regression

Given x, the classification should be:

\hat{G}(x) = \arg\max_{k \in \mathcal{G}} \hat{f}_k(x)

In another form, a target t_k is constructed for each class; t_k is the k-th column of the K x K identity matrix. According to a sum-of-squared-norm criterion:

\min_{\mathbf{B}} \sum_{i=1}^{N} \bigl\| y_i - [(1, x_i^T) \mathbf{B}]^T \bigr\|^2

and the classification is:

\hat{G}(x) = \arg\min_{k} \bigl\| \hat{f}(x) - t_k \bigr\|^2

Page 60: Logistic Regression

Model:

\log \frac{\Pr(G = 1 \mid X = x)}{\Pr(G = K \mid X = x)} = \beta_{10} + \beta_1^T x

\log \frac{\Pr(G = 2 \mid X = x)}{\Pr(G = K \mid X = x)} = \beta_{20} + \beta_2^T x

\vdots

\log \frac{\Pr(G = K-1 \mid X = x)}{\Pr(G = K \mid X = x)} = \beta_{(K-1)0} + \beta_{K-1}^T x

\Pr(G = k \mid X = x) = \frac{\exp(\beta_{k0} + \beta_k^T x)}{1 + \sum_{l=1}^{K-1} \exp(\beta_{l0} + \beta_l^T x)}, \qquad k = 1, \ldots, K-1

\Pr(G = K \mid X = x) = \frac{1}{1 + \sum_{l=1}^{K-1} \exp(\beta_{l0} + \beta_l^T x)}

Page 61: Logistic Regression (cont.)

Parameter estimation: the objective function is the conditional log-likelihood

\max_{\beta} \sum_{i=1}^{N} \log \Pr(y_i \mid x_i)

Parameters are estimated by IRLS (iteratively reweighted least squares).

In particular, for the two-class case, using the Newton-Raphson algorithm to solve the equation, the objective function is:

\ell(\beta) = \sum_{i=1}^{N} \bigl[ y_i \log p(x_i; \beta) + (1 - y_i) \log (1 - p(x_i; \beta)) \bigr]

where

p(x; \beta) = \Pr(y = 1 \mid x; \beta) = \frac{\exp(\beta^T x)}{1 + \exp(\beta^T x)}, \qquad \Pr(y = 0 \mid x; \beta) = 1 - p(x; \beta)

Page 62: Logistic Regression (cont.)

Substituting p(x; \beta) gives

\ell(\beta) = \sum_{i=1}^{N} \bigl[ y_i \, \beta^T x_i - \log (1 + e^{\beta^T x_i}) \bigr]

Setting the first derivative to zero:

\frac{\partial \ell(\beta)}{\partial \beta} = \sum_{i=1}^{N} x_i \bigl( y_i - p(x_i; \beta) \bigr) = 0

Second derivative (Hessian):

\frac{\partial^2 \ell(\beta)}{\partial \beta \, \partial \beta^T} = - \sum_{i=1}^{N} x_i x_i^T \, p(x_i; \beta) \bigl( 1 - p(x_i; \beta) \bigr)

Page 63: Logistic Regression (cont.)

Newton-Raphson update:

\beta^{new} = \beta^{old} - \left[ \frac{\partial^2 \ell(\beta)}{\partial \beta \, \partial \beta^T} \right]^{-1} \frac{\partial \ell(\beta)}{\partial \beta}

with

\frac{\partial \ell(\beta)}{\partial \beta} = \sum_{i=1}^{N} x_i \bigl( y_i - p(x_i; \beta) \bigr), \qquad \frac{\partial^2 \ell(\beta)}{\partial \beta \, \partial \beta^T} = - \sum_{i=1}^{N} x_i x_i^T \, p(x_i; \beta) \bigl( 1 - p(x_i; \beta) \bigr)
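
A minimal NumPy sketch of the two-class Newton-Raphson iteration above; the synthetic data, the true coefficients, and the number of iterations are illustrative assumptions:

import numpy as np

# Two-class logistic regression fitted by Newton-Raphson (equations from the slides)
rng = np.random.default_rng(1)
N = 200
X = np.column_stack([np.ones(N), rng.normal(size=N)])     # intercept + one feature
true_beta = np.array([-1.0, 2.0])                          # invented ground truth
p_true = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.uniform(size=N) < p_true).astype(float)

beta = np.zeros(2)
for _ in range(10):
    p = 1 / (1 + np.exp(-X @ beta))                        # p(x; beta)
    grad = X.T @ (y - p)                                   # first derivative
    hess = -(X.T * (p * (1 - p))) @ X                      # Hessian
    beta = beta - np.linalg.solve(hess, grad)              # beta_new = beta_old - H^{-1} g
print(np.round(beta, 2))                                   # roughly recovers [-1, 2]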

Page 64: Learning and Decision Trees

based on a set of attributes as input, a predicted output value (the decision) is learned

it is called classification learning for discrete values, and regression for continuous values

Boolean or binary classification: output values are true or false

conceptually the simplest case, but still quite powerful

making decisions: a sequence of tests is performed, testing the value of one of the attributes in each step; when a leaf node is reached, its value is returned; good correspondence to human decision-making

Page 65: Boolean Decision Trees

compute yes/no decisions based on sets of desirable or undesirable properties of an object or a situation

each node in the tree reflects one yes/no decision based on a test of the value of one property of the object

the root node is the starting point

leaf nodes represent the possible final decisions

branches are labeled with possible values

the learning aspect is to predict the value of a goal predicate (also called goal concept)

a hypothesis is formulated as a function that defines the goal predicate

Page 66: Terminology

example or sample: describes the values of the attributes and the goal

a positive sample has the value true for the goal predicate, a negative sample false

sample set: collection of samples used for training and validation

training: the training set consists of samples used for constructing the decision tree

validation: the test set is used to determine if the decision tree performs correctly

ideally, the test set is different from the training set

Page 67: Restaurant Sample Set

Example  Alt  Bar  Fri  Hun  Pat    Price  Rain  Res  Type     Est    WillWait
X1       Yes  No   No   Yes  Some   $$$    No    Yes  French   0-10   Yes
X2       Yes  No   No   Yes  Full   $      No    No   Thai     30-60  No
X3       No   Yes  No   No   Some   $      No    No   Burger   0-10   Yes
X4       Yes  No   Yes  Yes  Full   $      No    No   Thai     10-30  Yes
X5       Yes  No   Yes  No   Full   $$$    No    Yes  French   >60    No
X6       No   Yes  No   Yes  Some   $$     Yes   Yes  Italian  0-10   Yes
X7       No   Yes  No   No   None   $      Yes   No   Burger   0-10   No
X8       No   No   No   Yes  Some   $$     Yes   Yes  Thai     0-10   Yes
X9       No   Yes  Yes  No   Full   $      Yes   No   Burger   >60    No
X10      Yes  Yes  Yes  Yes  Full   $$$    No    Yes  Italian  10-30  No
X11      No   No   No   No   None   $      No    No   Thai     0-10   No
X12      Yes  Yes  Yes  Yes  Full   $      No    No   Burger   30-60  Yes

Page 68: Decision Tree Example

[Decision tree diagram "To wait, or not to wait?": the root tests Patrons? (None / Some / Full); the Full branch tests EstWait? (0-10 / 10-30 / 30-60 / >60), with further tests on Alternative?, Hungry?, Bar?, Walkable?, and Driveable? leading to Yes/No leaves]

Page 69: Expressiveness of Decision Trees

decision trees can also be expressed in logic as implication sentences

in principle, they can express propositional logic sentences

each row in the truth table of a sentence can be represented as a path in the tree

often there are more efficient trees

some functions require exponentially large decision trees

parity function, majority function

Page 70: Learning Decision Trees

problem: find a decision tree that agrees with the training set

trivial solution: construct a tree with one branch for each sample of the training set

works perfectly for the samples in the training set

may not work well for new samples (generalization)

results in relatively large trees

better solution: find a concise tree that still agrees with all samples

corresponds to the simplest hypothesis that is consistent with the training set

Page 71: Ockham's Razor

The most likely hypothesis is the simplest one that is consistent with all observations.

general principle for inductive learning

a simple hypothesis that is consistent with all observations is more likely to be correct than a complex one

Page 72: Constructing Decision Trees

in general, constructing the smallest possible decision tree is an intractable problem

algorithms exist for constructing reasonably small trees

basic idea: test the most important attribute first, i.e. the attribute that makes the most difference for the classification of an example

the importance of an attribute can be determined through information theory (see the entropy sketch below)

hopefully this will yield the correct classification after few tests
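
As mentioned above, the attribute choice can be based on information theory; a minimal sketch of entropy and information gain follows (the tiny example set at the end is invented for illustration):

import math
from collections import Counter

def entropy(labels):
    """Entropy in bits of a list of classification labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Expected reduction in entropy from splitting on one attribute."""
    n = len(examples)
    remainder = 0.0
    for value in set(e[attribute] for e in examples):
        subset = [l for e, l in zip(examples, labels) if e[attribute] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

examples = [{"Pat": "Some"}, {"Pat": "Full"}, {"Pat": "None"}, {"Pat": "Some"}]
labels = ["Yes", "No", "No", "Yes"]
print(round(information_gain(examples, labels, "Pat"), 3))   # 1.0 bit for this split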

Page 73: Decision Tree Algorithm

recursive formulation (a code sketch follows after this list):

select the best attribute to split positive and negative examples

if only positive or only negative examples are left, we are done

if no examples are left, no such examples were observed: return a default value calculated from the majority classification at the node's parent

if we have positive and negative examples left, but no attributes to split them, we are in trouble: such samples have the same description, but different classifications

may be caused by incorrect data (noise), or by a lack of information, or by a truly non-deterministic domain
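
A compact sketch of the recursive formulation above, selecting attributes with the information_gain function from the sketch on Page 72; the dictionary-based representation of examples and trees is an assumption of this sketch, not the slides' notation:

from collections import Counter
# assumes information_gain(examples, labels, attribute) from the sketch on Page 72

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def learn_tree(examples, labels, attributes, default):
    if not examples:                      # no examples left: use the parent's majority value
        return default
    if len(set(labels)) == 1:             # only positive or only negative examples left
        return labels[0]
    if not attributes:                    # same descriptions, different classifications
        return majority(labels)
    best = max(attributes, key=lambda a: information_gain(examples, labels, a))
    tree = {"attribute": best, "branches": {}}
    rest = [a for a in attributes if a != best]
    for value in set(e[best] for e in examples):
        pairs = [(e, l) for e, l in zip(examples, labels) if e[best] == value]
        sub_ex = [e for e, _ in pairs]
        sub_lab = [l for _, l in pairs]
        tree["branches"][value] = learn_tree(sub_ex, sub_lab, rest, majority(labels))
    return tree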

Page 74: Restaurant Sample Set

(restaurant sample set as shown on Page 67)

Page 75: Restaurant Sample Set

select the best attribute

candidate 1: Pat (Patrons), where the Some and None values are in agreement with the goal

candidate 2: Type, where no values are in agreement with the goal

(restaurant sample set as shown on Page 67)

Page 76: Partial Decision Tree

Patrons needs further discrimination only for the Full value

None and Some agree with the WillWait goal predicate

the next step will be performed on the remaining samples for the Full value of Patrons

[Partial tree: Patrons? splits the samples (positive: X1, X3, X4, X6, X8, X12; negative: X2, X5, X7, X9, X10, X11) as follows: None: X7, X11 (No); Some: X1, X3, X6, X8 (Yes); Full: X4, X12 positive and X2, X5, X9, X10 negative, still to be split]

Page 77: Restaurant Sample Set

select the next best attribute

candidate 1: Hungry, where the No value is in agreement with the goal

candidate 2: Type, where no values are in agreement with the goal

(restaurant sample set as shown on Page 67)

Page 78: Partial Decision Tree

Hungry needs further discrimination only for the Yes value

No agrees with the WillWait goal predicate

the next step will be performed on the remaining samples for the Yes value of Hungry

[Partial tree: under Patrons? = Full, Hungry? splits the remaining samples: No: X5, X9 (No); Yes: X4, X12 positive and X2, X10 negative, still to be split. The None and Some branches are as before.]

Page 79: Restaurant Sample Set

select the next best attribute

candidate 1: Type, where the Italian and Burger values are in agreement with the goal

candidate 2: Friday, where the No value is in agreement with the goal

(restaurant sample set as shown on Page 67)

Page 80: Partial Decision Tree

Type needs further discrimination only for the Thai value

Italian and Burger agree with the WillWait goal predicate

the next step will be performed on the remaining samples for the Thai value of Type

[Partial tree: under Hungry? = Yes, Type? splits the remaining samples: French: no samples; Italian: X10 (No); Thai: X4 positive and X2 negative, still to be split; Burger: X12 (Yes)]

Page 81: Restaurant Sample Set

select the next best attribute: candidate 1: Friday, where both the Yes and No values are in agreement with the goal

(restaurant sample set as shown on Page 67)

Page 82: Decision Tree

© 2000-2012 Franz Kurfess

the two remaining samples can be made consistent by selecting Friday as the next predicate

no more samples are left

[Final tree: Patrons? (None: No; Some: Yes; Full: Hungry?); Hungry? (No: No; Yes: Type?); Type? (French: no samples; Italian: No (X10); Burger: Yes (X12); Thai: Friday?); Friday? (No: No (X2); Yes: Yes (X4))]

Page 83: Performance of DT Learning

quality of predictions: predictions for the classification of unknown examples that agree with the correct result are obviously better

can be measured easily after the fact

it can be assessed in advance by splitting the available examples into a training set and a test set

learn the training set, and assess the performance via the test set

size of the tree: a smaller tree (especially depth-wise) is a more concise representation

Page 84: Noise and Over-fitting

the presence of irrelevant attributes (“noise”) may lead to more degrees of freedom in the decision tree

the hypothesis space is unnecessarily large

overfitting makes use of irrelevant attributes to distinguish between samples that have no meaningful differences, e.g. using the day of the week when rolling dice

over-fitting is a general problem for all learning algorithms

decision tree pruning identifies attributes that are likely to be irrelevant

cross-validation splits the sample data into different training and test sets; the results are averaged

Page 85: Ensemble Learning

multiple hypotheses (an ensemble) are generated, and their predictions combined

by using multiple hypotheses, the likelihood for misclassification is hopefully lower

also enlarges the hypothesis space

boosting is a frequently used ensemble method (a sketch follows below): each example in the training set has an associated weight

the weights of incorrectly classified examples are increased, and a new hypothesis is generated from this new weighted training set

the final hypothesis is a weighted-majority combination of all the generated hypotheses
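
A compact AdaBoost-style sketch of the reweighting idea described above, using one-level decision stumps as the weak hypotheses; the data points and the number of rounds are invented illustration values:

import math

# AdaBoost-style boosting with threshold stumps on one feature; labels are +1 / -1.
data = [(-2.0, -1), (-1.0, -1), (-0.5, -1), (0.5, 1), (1.0, 1), (2.0, 1), (0.2, -1), (-0.2, 1)]
weights = [1.0 / len(data)] * len(data)
ensemble = []                                    # list of (vote, threshold, direction)

for _ in range(10):                              # boosting rounds
    best = None
    for t in [x for x, _ in data]:               # candidate thresholds
        for d in (1, -1):                        # predict d if x >= t, else -d
            err = sum(w for (x, y), w in zip(data, weights)
                      if (d if x >= t else -d) != y)
            if best is None or err < best[0]:
                best = (err, t, d)
    err, t, d = best
    err = max(err, 1e-10)
    vote = 0.5 * math.log((1 - err) / err)       # weight of this hypothesis
    ensemble.append((vote, t, d))
    for i, (x, y) in enumerate(data):            # increase weights of misclassified examples
        pred = d if x >= t else -d
        weights[i] *= math.exp(-vote * y * pred)
    s = sum(weights)
    weights = [w / s for w in weights]

def classify(x):                                 # weighted-majority combination
    return 1 if sum(v * (d if x >= t else -d) for v, t, d in ensemble) >= 0 else -1

print([classify(x) == y for x, y in data])       # most (typically all) examples end up correct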

Page 86: Computational Learning Theory

relies on methods and techniques from theoretical computer science, statistics, and AI

used for the formal analysis of learning algorithms

basic principles:

if a hypothesis is seriously wrong, it will most likely generate a false prediction even for small numbers of examples

if a hypothesis is consistent with a large number of examples, most likely it is quite good, or probably approximately correct

Page 87: Probably Approximately Correct (PAC) Learning

approximately correct hypothesis: its error lies within a small constant of the true result

by testing a sufficient number of examples, one can see if a hypothesis has a high probability of being approximately correct

stationary assumption: training and test sets follow the same probability distribution

there is a connection between the past (known) and the future (unknown)

a selection of non-representative examples will not result in good learning

Page 88: Learning in Neural Networks

Neurons and the Brain

Neural Networks

Perceptrons

Multi-layer Networks

Applications

Page 89: Important Concepts and Terms

axon, back-propagation learning algorithm, bias, decision tree, dendrite, feedback, function approximation, generalization, gradient descent, hypothesis, inductive learning, learning element, linear separability, machine learning, multi-layer neural network, neural network, neuron, noise, Ockham's razor, perceptron, performance element, prior knowledge, sample, synapse, test set, training set, transparency

Page 90: Chapter Summary

learning is very important for agents to improve their decision-making process

unknown environments, changes, time constraints

most methods rely on inductive learning: a function is approximated from sample input-output pairs

decision trees are useful for learning deterministic Boolean functions

neural networks consist of simple interconnected computational elements

multi-layer feed-forward networks can learn any function provided they have enough units and time to learn