
Page 1: CS 541: Artificial Intelligence

CS 541: Artificial Intelligence

Lecture X: Markov Decision Process

Slides Credit: Peter Norvig and Sebastian Thrun

Page 2: CS 541: Artificial Intelligence

Schedule

Week  Date        Topic                                          Reading        Homework
1     08/29/2012  Introduction & Intelligent Agent               Ch 1 & 2       N/A
2     09/05/2012  Search strategy and heuristic search           Ch 3 & 4s      HW1 (Search)
3     09/12/2012  Constraint Satisfaction & Adversarial Search   Ch 4s & 5 & 6  Teaming Due
4     09/19/2012  Logic: Logic Agent & First Order Logic         Ch 7 & 8s      HW1 due, Midterm Project (Game)
5     09/26/2012  Logic: Inference on First Order Logic          Ch 8s & 9
6     10/03/2012  No class
7     10/10/2012  Uncertainty and Bayesian Network               Ch 13 & 14s    HW2 (Logic)
8     10/17/2012  Midterm Presentation                                          Midterm Project Due
9     10/24/2012  Inference in Bayesian Network                  Ch 14s         HW2 Due
10    10/31/2012                                                                HW3 (PR & ML)
11    11/07/2012  Machine Learning                               Ch 18 & 20
12    11/14/2012  Probabilistic Reasoning over Time              Ch 15
13    11/21/2012  No class                                                      HW3 due (11/19/2012)
14    11/28/2012  Markov Decision Process                        Ch 16          HW4 due
15    12/05/2012  Reinforcement learning                         Ch 21
16    12/12/2012  Final Project Presentation                                    Final Project Due

Page 3: CS 541: Artificial Intelligence

Re-cap of Lecture IX

Machine Learning:
• Classification (Naïve Bayes)
• Regression (Linear, Smoothing)
• Linear Separation (Perceptron, SVMs)
• Non-parametric classification (KNN)

Page 4: CS 541: Artificial Intelligence

Outline
• Planning under uncertainty
• Planning with Markov Decision Processes
• Utilities and Value Iteration
• Partially Observable Markov Decision Processes

Page 5: CS 541: Artificial Intelligence

Planning under uncertainty

Page 6: CS 541: Artificial Intelligence


Rhino Museum Tourguide

http://www.youtube.com/watch?v=PF2qZ-O3yAM

Page 7: CS 541: Artificial Intelligence


Minerva Robot

http://www.cs.cmu.edu/~rll/resources/cmubg.mpg

Page 8: CS 541: Artificial Intelligence


Pearl Robot

http://www.cs.cmu.edu/~thrun/movies/pearl-assist.mpg

Page 9: CS 541: Artificial Intelligence


Mine Mapping Robot Groundhog

http://www-2.cs.cmu.edu/~thrun/3D/mines/groundhog/anim/mine_mapping.mpg

Page 10: CS 541: Artificial Intelligence


Page 11: CS 541: Artificial Intelligence

Planning and Uncertainty

Page 12: CS 541: Artificial Intelligence

Planning: Classical Situation

[Figure: grid world with two terminal states, "heaven" and "hell"]

• World deterministic
• State observable

Page 13: CS 541: Artificial Intelligence

MDP-Style Planning

[Figure: grid world with two terminal states, "heaven" and "hell"]

• World stochastic
• State observable

[Koditschek 87, Barto et al. 89]

• Policy
• Universal Plan
• Navigation function

Page 14: CS 541: Artificial Intelligence

Stochastic, Partially Observable

[Figure: grid world where the agent cannot tell which terminal state is heaven and which is hell; a sign carries the answer]

[Sondik 72] [Littman/Cassandra/Kaelbling 97]

Page 15: CS 541: Artificial Intelligence

Stochastic, Partially Observable

[Figure: two possible worlds, each containing a sign; in one the exits read hell/heaven, in the other heaven/hell]

Page 16: CS 541: Artificial Intelligence

Stochastic, Partially Observable

[Figure: the agent starts in one of two equally likely worlds (50% / 50%), with heaven and hell swapped between them; a sign in each world identifies the layout]

Page 17: CS 541: Artificial Intelligence

Stochastic, Partially Observable

[Figure: the same setup repeated: a start state, a sign, and two equally likely worlds (50% / 50%) with heaven and hell swapped]

Page 18: CS 541: Artificial Intelligence

A Quiz

actions        # states          size belief space?                       sensors
deterministic  3: s1, s2, s3     3                                        perfect
stochastic     3: s1, s2, s3     3                                        perfect
deterministic  3: s1, s2, s3     2^3-1: s1, s2, s3, s12, s13, s23, s123   abstract states
stochastic     3: s1, s2, s3     2-dim continuous: p(S=s1), p(S=s2)       abstract states
stochastic     3: s1, s2, s3     2-dim continuous: p(S=s1), p(S=s2)       none
deterministic  1-dim continuous  1-dim continuous                         perfect
stochastic     1-dim continuous  ∞-dim continuous                         stochastic
stochastic     ∞-dim continuous  aargh!                                   stochastic

Page 19: CS 541: Artificial Intelligence

Planning with Markov Decision Processes

Page 20: CS 541: Artificial Intelligence

MDP Planning: Solution for the planning problem
• Noisy controls
• Perfect perception
• Generates a "universal plan" (= policy)

Page 21: CS 541: Artificial Intelligence

Grid World

• The agent lives in a grid
• Walls block the agent's path
• The agent's actions do not always go as planned (see the code sketch after this list):
  • 80% of the time, the action North takes the agent North (if there is no wall there)
  • 10% of the time, North takes the agent West; 10% East
  • If there is a wall in the direction the agent would have been taken, the agent stays put
• Small "living" reward each step
• Big rewards come at the end
• Goal: maximize the sum of rewards
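A minimal Python sketch of this transition model (the coordinate scheme, wall set, and function names are hypothetical illustrations, not from the slides):

```python
# Hedged sketch of the slide's noisy grid-world dynamics: the intended
# move succeeds with probability 0.8; with probability 0.1 each, the
# agent slips 90 degrees left or right; blocked moves leave it in place.

MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}
RIGHT = {"N": "E", "E": "S", "S": "W", "W": "N"}

def transition(state, action, walls, shape):
    """Return {next_state: probability} for one (state, action) pair."""
    def move(s, a):
        r, c = s
        dr, dc = MOVES[a]
        nr, nc = r + dr, c + dc
        # A wall or the grid boundary keeps the agent where it is.
        if (nr, nc) in walls or not (0 <= nr < shape[0] and 0 <= nc < shape[1]):
            return s
        return (nr, nc)

    dist = {}
    for a, p in ((action, 0.8), (LEFT[action], 0.1), (RIGHT[action], 0.1)):
        s2 = move(state, a)
        dist[s2] = dist.get(s2, 0.0) + p
    return dist
```

For example, transition((2, 0), "N", walls=set(), shape=(3, 4)) returns {(1, 0): 0.8, (2, 1): 0.1, (2, 0): 0.1}: the westward slip hits the boundary, so that probability mass stays on the current square.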

Page 22: CS 541: Artificial Intelligence

Grid Futures

[Figure: Deterministic Grid World vs. Stochastic Grid World; in the deterministic world an action (E, N, S, W) has a single successor state, while in the stochastic world the same action fans out over several possible successors]

Page 23: CS 541: Artificial Intelligence

Markov Decision Processes

An MDP is defined by:
• A set of states s ∈ S
• A set of actions a ∈ A
• A transition function T(s, a, s')
  • The probability that a from s leads to s', i.e., P(s' | s, a)
  • Also called the model
• A reward function R(s, a, s')
  • Sometimes just R(s) or R(s')
• A start state (or distribution)
• Maybe a terminal state

MDPs are a family of non-deterministic search problems. Reinforcement learning deals with MDPs where we don't know the transition or reward functions. (A sketch of this definition in code follows.)

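One hypothetical way to package these ingredients in Python (field names are illustrative, not from the slides); the noisy transition function sketched under the Grid World slide would plug in as T:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, List

State = Hashable
Action = Hashable

@dataclass
class MDP:
    states: List[State]
    actions: Callable[[State], List[Action]]          # actions available in s
    T: Callable[[State, Action], Dict[State, float]]  # (s, a) -> {s': P(s'|s,a)}
    R: Callable[[State, Action, State], float]        # reward for (s, a, s')
    start: State
    gamma: float = 0.9                                # discount factor
    terminals: frozenset = frozenset()                # optional terminal states
```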

Page 24: CS 541: Artificial Intelligence

Solving MDPs

• In deterministic single-agent search problems, we want an optimal plan, or sequence of actions, from the start to a goal
• In an MDP, we want an optimal policy π*: S → A
  • A policy gives an action for each state
  • An optimal policy maximizes expected utility if followed
  • Defines a reflex agent

Example: the optimal policy when R(s, a, s') = -0.03 for all non-terminal states s

Page 25: CS 541: Artificial Intelligence

Example Optimal Policies

[Figure: four optimal policies, for living rewards R(s) = -0.01, R(s) = -0.03, R(s) = -0.4, and R(s) = -2.0]

Page 26: CS 541: Artificial Intelligence

MDP Search Trees

Each MDP state gives a search tree

[Figure: expect-max tree rooted at state s; an action a leads to the q-state (s, a), which fans out over successor states s']

• s is a state
• (s, a) is a q-state
• (s, a, s') is called a transition
  • T(s, a, s') = P(s' | s, a)
  • R(s, a, s') is the associated reward

Page 27: CS 541: Artificial Intelligence

Why Not Search Trees?

Why not solve with conventional planning?

Problems:
• This tree is usually infinite (why?)
• The same states appear over and over (why?)
• We would search once per state (why?)

Page 28: CS 541: Artificial Intelligence

Utilities

Page 29: CS 541: Artificial Intelligence

Utilities

• Utility = sum of future rewards
• Problem: infinite state sequences have infinite rewards
• Solutions:
  • Finite horizon: terminate episodes after a fixed T steps (e.g., life). Gives nonstationary policies (π depends on the time left)
  • Absorbing state: guarantee that for every policy, a terminal state will eventually be reached (like "done" for High-Low)
  • Discounting: for 0 < γ < 1, U([r_0, ..., r_∞]) = Σ_t γ^t r_t (a worked example follows below)
• A smaller γ means a smaller "horizon", i.e., a shorter-term focus
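A tiny worked example of the discounted sum (the numbers are chosen only for illustration):

```python
# Utility of receiving reward 1 at each of three steps, with gamma = 0.9:
gamma = 0.9
rewards = [1.0, 1.0, 1.0]
utility = sum(gamma ** t * r for t, r in enumerate(rewards))
print(utility)  # 1 + 0.9 + 0.81 = 2.71
# Even an infinite stream of bounded rewards stays finite under discounting:
# sum_t gamma^t * r_max = r_max / (1 - gamma), here 1 / 0.1 = 10.
```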

Page 30: CS 541: Artificial Intelligence

Discounting
• Typically we discount rewards by γ < 1 each time step
• Sooner rewards then have higher utility than later rewards
• Discounting also helps the algorithms converge

Page 31: CS 541: Artificial Intelligence

Recap: Defining MDPs

Markov decision processes:
• States S
• Start state s0
• Actions A
• Transitions P(s'|s,a) (or T(s,a,s'))
• Rewards R(s,a,s') (and discount γ)

MDP quantities so far:
• Policy = choice of action for each state
• Utility (or return) = sum of discounted rewards

Page 32: CS 541: Artificial Intelligence

Optimal Utilities

• Fundamental operation: compute the values (optimal expect-max utilities) of states s
• Why? Optimal values define optimal policies!
• Define the value of a state s:
  V*(s) = expected utility starting in s and acting optimally
• Define the value of a q-state (s, a):
  Q*(s, a) = expected utility starting in s, taking action a, and thereafter acting optimally
• Define the optimal policy:
  π*(s) = optimal action from state s

Page 33: CS 541: Artificial Intelligence

The Bellman Equations

• The definition of "optimal utility" leads to a simple one-step lookahead relationship among optimal utility values: optimal rewards = maximize over the first action and then follow the optimal policy
• Formally (sketched in code below):
  V*(s) = max_a Q*(s, a)
  Q*(s, a) = Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]
  and hence V*(s) = max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]
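A minimal sketch of the one-step lookahead these equations describe, reusing the hypothetical dictionary-style MDP structure from the earlier sketch:

```python
def q_value(mdp, s, a, V):
    """Q*(s, a) backup: expected reward plus discounted value of s'."""
    return sum(p * (mdp.R(s, a, s2) + mdp.gamma * V[s2])
               for s2, p in mdp.T(s, a).items())

def bellman_value(mdp, s, V):
    """V*(s) = max_a Q*(s, a); terminal states are pinned at 0 here,
    since their reward is collected on the transition into them."""
    if s in mdp.terminals:
        return 0.0
    return max(q_value(mdp, s, a, V) for a in mdp.actions(s))
```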

Page 34: CS 541: Artificial Intelligence

Solving MDPs

• We want to find the optimal policy π*
• Proposal 1: modified expect-max search, starting from each state s

Page 35: CS 541: Artificial Intelligence

Value Iteration

• Idea:
  • Start with V_0*(s) = 0
  • Given V_i*, calculate the values for all states for depth i+1:
    V_{i+1}*(s) ← max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V_i*(s') ]
  • This is called a value update or Bellman update
  • Repeat until convergence (full sketch below)
• Theorem: value iteration converges to the unique optimal values
  • Basic idea: the approximations get refined towards the optimal values
  • The policy may converge long before the values do
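Putting the Bellman update in a loop gives the algorithm just described; this sketch reuses the hypothetical MDP structure and the bellman_value helper from above:

```python
def value_iteration(mdp, eps=1e-6):
    """Repeat V_{i+1}(s) = max_a sum_{s'} T(s,a,s') [R(s,a,s') + gamma V_i(s')]
    over all states until the largest change falls below eps."""
    V = {s: 0.0 for s in mdp.states}        # V_0(s) = 0, as on the slide
    while True:
        V_new = {s: bellman_value(mdp, s, V) for s in mdp.states}
        delta = max(abs(V_new[s] - V[s]) for s in mdp.states)
        V = V_new
        if delta < eps:                     # values have converged
            return V
```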

Page 36: CS 541: Artificial Intelligence

Example: Bellman Updates

[Figure: one Bellman update on the grid world; the max happens for a = right, other actions not shown]

Example: γ = 0.9, living reward = 0, noise = 0.2

Page 37: CS 541: Artificial Intelligence

Example: Value Iteration

Information propagates outward from the terminal states, and eventually all states have correct value estimates.

[Figure: value estimates V_2 and V_3 on the grid world]

Page 38: CS 541: Artificial Intelligence

Computing Actions

Which action should we choose from state s?
• Given the optimal values V*:
  π*(s) = argmax_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]
• Given the optimal q-values Q*:
  π*(s) = argmax_a Q*(s, a)

Lesson: actions are easier to select from Q's! (Both recipes are sketched below.)
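Side by side, on the same hypothetical structures: extracting an action from Q* is a plain argmax, while extracting it from V* needs one more expect-max backup through T and R.

```python
def policy_from_q(mdp, Q):
    """pi*(s) = argmax_a Q(s, a); Q is keyed by (state, action) pairs."""
    return {s: max(mdp.actions(s), key=lambda a: Q[(s, a)])
            for s in mdp.states if s not in mdp.terminals}

def policy_from_v(mdp, V):
    """Needs a one-step lookahead (q_value above) to rebuild the Q-values."""
    return {s: max(mdp.actions(s), key=lambda a: q_value(mdp, s, a, V))
            for s in mdp.states if s not in mdp.terminals}
```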

Page 39: CS 541: Artificial Intelligence

Asynchronous Value Iteration*

• In value iteration, we update every state in each iteration
• Actually, any sequence of Bellman updates will converge if every state is visited infinitely often
• In fact, we can update the policy as seldom or often as we like, and we will still converge
• Idea: update the states whose value we expect to change:
  if |V_{i+1}*(s) − V_i*(s)| is large, then update the predecessors of s (one possible sketch follows)
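One hedged sketch of this idea, close in spirit to prioritized sweeping; the queue discipline and the predecessors argument are assumptions, not given on the slide:

```python
import heapq
from itertools import count

def async_value_iteration(mdp, predecessors, eps=1e-6):
    """Update one state at a time, revisiting predecessors of any state
    whose value just changed a lot. predecessors: s -> set of states
    with an action that can lead into s."""
    V = {s: 0.0 for s in mdp.states}
    tie = count()                          # tie-breaker for heap ordering
    heap = [(0.0, next(tie), s) for s in mdp.states]
    heapq.heapify(heap)
    while heap:
        _, _, s = heapq.heappop(heap)
        new = bellman_value(mdp, s, V)
        change = abs(new - V[s])
        V[s] = new
        if change > eps:                   # large change: predecessors are stale
            for p in predecessors.get(s, ()):
                heapq.heappush(heap, (-change, next(tie), p))
    return V
```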

Page 40: CS 541: Artificial Intelligence

Value Iteration: Example

Page 41: CS 541: Artificial Intelligence

Another Example

[Figure: a map with its value function and the resulting plan]

Page 42: CS 541: Artificial Intelligence

Another Example

[Figure: a map with its value function and the resulting plan]

Page 43: CS 541: Artificial Intelligence

Partially Observable Markov Decision Processes

Page 44: CS 541: Artificial Intelligence

Stochastic, Partially Observable

[Figure: repeated from earlier: a start state, a sign, and two equally likely worlds (50% / 50%) with heaven and hell swapped]

Page 45: CS 541: Artificial Intelligence

Value Iteration in Belief Space: POMDPs

• Partially Observable Markov Decision Process
  • Known model (learning it is even harder!)
  • Observation uncertainty
  • Usually also: transition uncertainty
  • Planning in belief space = the space of all probability distributions (a belief-update sketch follows)
• Value function: a piecewise linear, convex function over the belief space
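The basic operation behind belief-space planning is the belief update itself; a minimal Bayes-filter sketch, where the observation model O(s', a, z) = P(z | s', a) is an assumption layered on top of the slide's transition model T:

```python
def belief_update(mdp, O, b, a, z):
    """Prediction through the transition model, then correction by the
    observation z. b maps states to probabilities; O(s2, a, z) = P(z|s2,a).
    Assumes z has nonzero likelihood under the predicted belief."""
    # Prediction: push the belief through the stochastic transitions.
    predicted = {}
    for s, p in b.items():
        for s2, pt in mdp.T(s, a).items():
            predicted[s2] = predicted.get(s2, 0.0) + p * pt
    # Correction: weight by the observation likelihood and renormalize.
    posterior = {s2: O(s2, a, z) * p for s2, p in predicted.items()}
    norm = sum(posterior.values())
    return {s2: p / norm for s2, p in posterior.items()}
```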

Page 46: CS 541: Artificial Intelligence

Introduction to POMDPs (1 of 3)

[Figure: a two-state world (s1, s2) with actions a and b and payoffs 100, -40, 80, -100; the value of each action is a linear function of the belief p(s1), plotted on a scale from -100 to 100]

[Sondik 72, Littman, Kaelbling, Cassandra '97]

Page 47: CS 541: Artificial Intelligence

Introduction to POMDPs (2 of 3)

[Figure: a third action c moves the state stochastically (80% / 20%); it maps the belief p(s1) linearly to a new belief p(s1')]

[Sondik 72, Littman, Kaelbling, Cassandra '97]

Page 48: CS 541: Artificial Intelligence

Introduction to POMDPs (3 of 3)

[Figure: sensing returns observation A or B (e.g., with 50%/50% and 30%/70% likelihoods); each observation z leads to its own posterior belief p(s1'|z)]

The value of a belief is the expectation over the observations:

  V(p(s1)) = Σ_{z ∈ {A, B}} V(p(s1' | z)) p(z)

[Sondik 72, Littman, Kaelbling, Cassandra '97]

Page 49: CS 541: Artificial Intelligence

POMDP Algorithm

• Belief space = the space of all probability distributions (continuous)
• Value function: the max of a set of linear functions in belief space
• Backup: creates new linear functions (see the sketch below)
• The number of linear functions can grow fast!
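A sketch of the "max of linear functions" representation: each linear function (an alpha vector) scores the belief by an inner product, and V(b) takes the max over them. The payoff numbers below echo the two-state example from the earlier slides, but their assignment to states is illustrative.

```python
def pomdp_value(alphas, b):
    """V(b) = max over alpha vectors of the inner product alpha . b.
    alphas: list of dicts state -> value; b: dict state -> probability."""
    return max(sum(alpha[s] * p for s, p in b.items()) for alpha in alphas)

# Hypothetical alpha vectors for a two-state world:
alpha_a = {"s1": 100.0, "s2": -100.0}   # value of always executing action a
alpha_b = {"s1": -40.0, "s2": 80.0}     # value of always executing action b
print(pomdp_value([alpha_a, alpha_b], {"s1": 0.5, "s2": 0.5}))  # -> 20.0
```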

Page 50: CS 541: Artificial Intelligence

Why is This So Complex?

[Figure: state space planning (no state uncertainty) vs. belief space planning (full state uncertainty)]

Page 51: CS 541: Artificial Intelligence

Belief Space Structure

The controller may be globally uncertain... but not usually.

Page 52: CS 541: Artificial Intelligence

Augmented MDPs:

Compress the belief b into an augmented state

  b̄ = ( argmax_s b(s) ; H[b] )

i.e., the conventional state space (the most likely state) augmented with the uncertainty of the belief (its entropy H[b]); a small sketch follows.

[Roy et al, 98/99]
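A sketch of that compression (the entropy base and tie-breaking are illustrative choices):

```python
import math

def augmented_state(b):
    """Compress a belief into (most likely state, entropy of the belief)."""
    most_likely = max(b, key=b.get)                        # argmax_s b(s)
    entropy = -sum(p * math.log(p) for p in b.values() if p > 0)
    return most_likely, entropy
```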

Page 53: CS 541: Artificial Intelligence

Path Planning with Augmented MDPs

[Figure: a conventional planner vs. the probabilistic planner; the probabilistic route trades path length for information gain]

[Roy et al, 98/99]