CS 188: Artificial Intelligence Spring 2007 Lecture 23: Reinforcement Learning: III 4/17/2007 Srini Narayanan – ICSI and UC Berkeley


Page 1: CS 188: Artificial Intelligence Spring 2007

CS 188: Artificial Intelligence, Spring 2007

Lecture 23: Reinforcement Learning: III (4/17/2007)

Srini Narayanan – ICSI and UC Berkeley

Page 2: CS 188: Artificial Intelligence Spring 2007

Announcements

Othello tournament rules up. On-line readings for this week.

Page 3: CS 188: Artificial Intelligence Spring 2007

Value Iteration

Idea:
- Start with bad guesses at all utility values (e.g. V0(s) = 0)
- Update all values simultaneously using the Bellman equation (called a value update or Bellman update):

  V_{k+1}(s) = max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_k(s') ]

- Repeat until convergence

Theorem: will converge to unique optimal values
- Basic idea: bad guesses get refined towards optimal values
- Policy may converge long before values do
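A minimal sketch of this loop in Python, under an assumed MDP interface (hypothetical names, not from the lecture): `states` is an iterable of states, `actions(s)` lists the available actions, `T(s, a)` returns (probability, next-state) pairs, and `R(s, a, s2)` gives the reward.

```python
def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    """Apply Bellman updates to all states until the values stop changing."""
    V = {s: 0.0 for s in states}                     # V0(s) = 0: bad initial guesses
    while True:
        new_V = {}
        for s in states:
            acts = actions(s)
            if not acts:                             # terminal state: nothing to back up
                new_V[s] = 0.0
                continue
            # Bellman update: best expected one-step look-ahead over actions
            new_V[s] = max(
                sum(p * (R(s, a, s2) + gamma * V[s2]) for p, s2 in T(s, a))
                for a in acts
            )
        if max(abs(new_V[s] - V[s]) for s in states) < tol:
            return new_V
        V = new_V
```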

Page 4: CS 188: Artificial Intelligence Spring 2007

Example: Bellman Updates

Page 5: CS 188: Artificial Intelligence Spring 2007

Example: Value Iteration

Information propagates outward from terminal states and eventually all states have correct value estimates

[DEMO]

Page 6: CS 188: Artificial Intelligence Spring 2007

Policy Iteration

Alternate approach:
- Policy evaluation: calculate utilities for a fixed policy until convergence (remember last lecture)
- Policy improvement: update the policy based on the resulting converged utilities
- Repeat until the policy converges

This is policy iteration
- Can converge faster under some conditions

Page 7: CS 188: Artificial Intelligence Spring 2007

Policy Iteration

If we have a fixed policy π, use the simplified Bellman equation to calculate utilities:

  V^π(s) = Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ]

For fixed utilities, it is easy to find the best action according to one-step look-ahead:

  π(s) = argmax_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V(s') ]

(Both steps are sketched in code below.)
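A sketch of both steps under the same assumed MDP interface as the value-iteration sketch above (illustrative, not the lecture's code); `policy` maps each non-terminal state to an action.

```python
def policy_evaluation(policy, states, T, R, gamma=0.9, tol=1e-6):
    """Solve the simplified Bellman equation for a fixed policy by iteration."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            a = policy.get(s)
            if a is None:                            # terminal state: value stays 0
                continue
            v = sum(p * (R(s, a, s2) + gamma * V[s2]) for p, s2 in T(s, a))
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V


def policy_improvement(V, states, actions, T, R, gamma=0.9):
    """One-step look-ahead: choose the best action under the current utilities."""
    return {
        s: max(actions(s),
               key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                 for p, s2 in T(s, a)))
        for s in states if actions(s)
    }
```

Policy iteration then alternates these two steps until the improved policy stops changing.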

Page 8: CS 188: Artificial Intelligence Spring 2007

Comparison

In value iteration:
- Every pass (or "backup") updates both utilities (explicitly, based on current utilities) and policy (possibly implicitly, based on current policy)

In policy iteration:
- Several passes to update utilities with a frozen policy
- Occasional passes to update the policy

Hybrid approaches (asynchronous policy iteration):
- Any sequence of partial updates to either policy entries or utilities will converge if every state is visited infinitely often

Page 9: CS 188: Artificial Intelligence Spring 2007

Reinforcement Learning

Reinforcement learning:
- Still have an MDP:
  - A set of states s ∈ S
  - A set of actions (per state) A
  - A model T(s,a,s')
  - A reward function R(s,a,s')
- Still looking for a policy π(s)

New twist: don't know T or R
- I.e. don't know which states are good or what the actions do
- Must actually try actions and states out to learn

[DEMO]

Page 10: CS 188: Artificial Intelligence Spring 2007

Example: Animal Learning

RL studied experimentally for more than 60 years in psychology Rewards: food, pain, hunger, drugs, etc. Mechanisms and sophistication debated

Example: foraging Bees learn near-optimal foraging plan in field of

artificial flowers with controlled nectar supplies Bees have a direct neural connection from nectar

intake measurement to motor planning area

Page 11: CS 188: Artificial Intelligence Spring 2007

Passive Learning

Simplified task You don’t know the transitions T(s,a,s’) You don’t know the rewards R(s,a,s’) You are given a policy (s) Goal: learn the state values (and maybe the model)

In this case: No choice about what actions to take Just execute the policy and learn from experience We’ll get to the general case soon

Page 12: CS 188: Artificial Intelligence Spring 2007

Example: Direct Estimation (Simple Monte Carlo)

Episodes (γ = 1, living reward R = -1; grid exits: +100 at (4,3), -100 at (4,2)):

Episode 1: (1,1) up -1; (1,2) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (3,3) right -1; (4,3) exit +100 (done)

Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done)

Averaging the observed returns following each state:

U(1,1) ≈ (92 + -106) / 2 = -7

U(3,3) ≈ (99 + 97 + -102) / 3 ≈ 31.3

(A code sketch of this averaging follows below.)
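A sketch of direct estimation (every-visit Monte Carlo), assuming each episode is given as a list of (state, reward) pairs; this input format is illustrative, but with the two episodes above and γ = 1 it reproduces the averages shown.

```python
from collections import defaultdict

def direct_estimation(episodes, gamma=1.0):
    """Estimate U(s) as the average of the returns observed after visiting s."""
    totals = defaultdict(float)                  # sum of observed returns per state
    counts = defaultdict(int)                    # number of visits per state
    for episode in episodes:
        G = 0.0
        backwards = []
        for state, reward in reversed(episode):  # accumulate returns back to front
            G = reward + gamma * G
            backwards.append((state, G))
        for state, G in backwards:
            totals[state] += G
            counts[state] += 1
    return {s: totals[s] / counts[s] for s in totals}
```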

Page 13: CS 188: Artificial Intelligence Spring 2007

Model-Based Learning

Idea:
- Learn the model empirically (rather than values)
- Solve the MDP as if the learned model were correct

Empirical model learning
- Simplest case (see the sketch below):
  - Count outcomes for each s,a
  - Normalize to give an estimate of T(s,a,s')
  - Discover R(s,a,s') the first time we experience (s,a,s')
- More complex learners are possible (e.g. if we know that all squares have related action outcomes, e.g. "stationary noise")
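A sketch of the simplest case: count outcomes per (s, a), normalize for T, and record R the first time a transition is experienced (class and method names are illustrative).

```python
from collections import defaultdict

class EmpiricalModel:
    """Learn T(s,a,s') by counting outcomes and R(s,a,s') from first experience."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))   # (s,a) -> {s': count}
        self.rewards = {}                                      # (s,a,s') -> reward

    def observe(self, s, a, s2, r):
        self.counts[(s, a)][s2] += 1
        self.rewards.setdefault((s, a, s2), r)   # discover R(s,a,s') the first time

    def T(self, s, a, s2):
        outcomes = self.counts[(s, a)]
        total = sum(outcomes.values())
        return outcomes[s2] / total if total else 0.0          # normalized counts

    def R(self, s, a, s2):
        return self.rewards.get((s, a, s2), 0.0)
```

For example, after feeding in the two episodes above, `model.T((2,3), 'right', (3,3))` would come out as 2/2 = 1.0, matching the next slide.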

Page 14: CS 188: Artificial Intelligence Spring 2007

Example: Model-Based Learning

Episodes: the same two episodes as in the direct-estimation example (γ = 1; grid exits: +100 at (4,3), -100 at (4,2)).

Learned transition estimates from the observed outcome counts:

T((3,3), right, (4,3)) = 1 / 3

T((2,3), right, (3,3)) = 2 / 2

Page 15: CS 188: Artificial Intelligence Spring 2007

Model-Based Learning

In general, want to learn the optimal policy, not evaluate a fixed policy

Idea: adaptive dynamic programming
- Learn an initial model of the environment
- Solve for the optimal policy for this model (value or policy iteration)
- Refine the model through experience and repeat
- Crucial: we have to make sure we actually learn about all of the model

Page 16: CS 188: Artificial Intelligence Spring 2007

Example: Greedy ADP

Imagine we find the lower path to the good exit first

Some states will never be visited following this policy from (1,1)

We'll keep re-using this policy because following it never visits the regions of the model we need to learn the optimal policy


Page 17: CS 188: Artificial Intelligence Spring 2007

What Went Wrong?

Problem with following the optimal policy for the current model:
- Never learn about better regions of the space if the current policy neglects them

Fundamental tradeoff: exploration vs. exploitation
- Exploration: must take actions with suboptimal estimates to discover new rewards and increase eventual utility
- Exploitation: once the true optimal policy is learned, exploration reduces utility
- Systems must explore in the beginning and exploit in the limit


Page 18: CS 188: Artificial Intelligence Spring 2007

Dynamic Programming

V(s_t) ← E_π[ r_{t+1} + γ V(s_{t+1}) ]

[Backup diagram: a full one-step backup from s_t, taking the expectation over all actions and successor states.]

Page 19: CS 188: Artificial Intelligence Spring 2007

Simple Monte Carlo

V(s_t) ← V(s_t) + α [ R_t - V(s_t) ]

where R_t is the actual return following state s_t.

[Backup diagram: a complete sampled episode from s_t to a terminal state.]

Page 20: CS 188: Artificial Intelligence Spring 2007

Simplest TD Method

V(s_t) ← V(s_t) + α [ r_{t+1} + γ V(s_{t+1}) - V(s_t) ]

[Backup diagram: a single sampled transition from s_t to s_{t+1}.]

Page 21: CS 188: Artificial Intelligence Spring 2007

Model-Free Learning

Big idea: why bother learning T?
- Update each time we experience a transition
- Frequent outcomes will contribute more updates (over time)

Temporal difference learning (TD)
- Policy still fixed!
- Move values toward the value of whatever successor occurs (a sketch of the update follows below)
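A sketch of the TD update applied after each observed transition (s, r, s') while following the fixed policy; V is a dict of value estimates, and alpha/gamma are the learning rate and discount (illustrative names).

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Move V(s) toward the one-sample target r + gamma * V(s')."""
    old = V.get(s, 0.0)
    target = r + gamma * V.get(s_next, 0.0)   # value of whatever successor occurred
    V[s] = old + alpha * (target - old)       # same as (1 - alpha)*old + alpha*target
```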

Page 22: CS 188: Artificial Intelligence Spring 2007

TD Learning features

- On-line
- Incremental
- Bootstrapping (like DP, unlike MC)
- Model free
- Does not require special treatment of exploration actions (unlike DP)
- Converges for any policy to the correct value of a state for that policy, as long as:
  - alpha (the learning rate) is high in the beginning and low at the end (say 1/k)
  - gamma (the discount) is less than one

Page 23: CS 188: Artificial Intelligence Spring 2007

Problems with TD Value Learning

TD value learning is model-free for policy evaluation

However, if we want to turn our value estimates into a policy, we're sunk:

  π(s) = argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V(s') ]

(the one-step look-ahead still needs the model T and R)

Idea: learn state-action pairings (Q-values) directly
- Makes action selection model-free too!

Page 24: CS 188: Artificial Intelligence Spring 2007

Q-Functions

To simplify things, introduce a q-value for a state and action under a policy π: the utility of starting in state s, taking action a, then following π thereafter:

  Q^π(s, a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V^π(s') ]

Page 25: CS 188: Artificial Intelligence Spring 2007

The Bellman Equations

Definition of utility leads to a simple relationship amongst optimal utility values:

Optimal rewards = maximize over the first action and then follow the optimal policy

Formally:

  V*(s) = max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]

  Q*(s, a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ max_{a'} Q*(s', a') ]

Page 26: CS 188: Artificial Intelligence Spring 2007

Q-Learning

Learn Q*(s,a) values
- Receive a sample (s, a, s', r)
- Consider your old estimate: Q(s, a)
- Consider your new sample estimate:

    sample = R(s,a,s') + γ max_{a'} Q(s', a')

- Nudge the old estimate towards the new sample:

    Q(s, a) ← (1 - α) Q(s, a) + α · sample
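A sketch of this update in Python, with Q stored as a dict keyed by (state, action) and `actions(s2)` listing the actions available in the successor state (illustrative interface).

```python
def q_update(Q, s, a, s2, r, actions, alpha=0.1, gamma=0.9):
    """Nudge Q(s,a) toward the sample r + gamma * max_a' Q(s',a')."""
    next_value = max((Q.get((s2, a2), 0.0) for a2 in actions(s2)), default=0.0)
    sample = r + gamma * next_value                  # new sample estimate
    old = Q.get((s, a), 0.0)                         # old estimate
    Q[(s, a)] = (1 - alpha) * old + alpha * sample   # running interpolation
```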

Page 27: CS 188: Artificial Intelligence Spring 2007

Q-Learning

Page 28: CS 188: Artificial Intelligence Spring 2007

Exploration / Exploitation

Several schemes for forcing exploration
- Simplest: random actions (ε-greedy)
  - Every time step, flip a coin
  - With probability ε, act randomly
  - With probability 1 - ε, act according to the current policy

Problems with random actions?
- You do explore the space, but keep thrashing around once learning is done
- One solution: lower ε over time
- Another solution: exploration functions
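A sketch of ε-greedy action selection over the learned Q-values (an illustrative helper; lowering epsilon over time addresses the thrashing problem mentioned above).

```python
import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """With probability epsilon act randomly; otherwise act greedily w.r.t. Q."""
    acts = actions(s)
    if random.random() < epsilon:
        return random.choice(acts)                       # explore
    return max(acts, key=lambda a: Q.get((s, a), 0.0))   # exploit current estimates
```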

Page 29: CS 188: Artificial Intelligence Spring 2007

Demo of Q Learning

Demo: robot crawling

Parameters:
- k (learning rate)
- discounted reward (high for future rewards)
- exploration (should decrease with time)

MDP:
- Reward: number of pixels moved to the right per iteration
- Actions: arm up and down (yellow line), hand up and down (red line)

Page 30: CS 188: Artificial Intelligence Spring 2007

Q-Learning Properties

Will converge to the optimal policy
- If you explore enough
- If you make the learning rate small enough

Neat property: does not learn policies which are optimal in the presence of action-selection noise (it learns the policy that is optimal once exploration stops)
