
Page 1: An Object-oriented Representation for Efficient Reinforcement Learning

An Object-oriented Representation for Efficient Reinforcement Learning

Carlos Diuk, Andre Cohen and Michael L. Littman

Rutgers Laboratory for Real-Life Reinforcement Learning (RL)³

Department of Computer Science, Rutgers University (New Jersey, USA)

ICML 2008 – Helsinki, Finland

Page 2: Motivation

How would YOU play this game?

Page 3: What’s in a state?

To a standard RL agent, a state is just a hash code that tells it whether it has been “there” before.

What we (the agent) can actually “see”: objects, interactions, spatial relationships.

s1 -> a0 -> s5

s5 -> a2 -> s24

s24 -> a1 -> s1

If we know that our agents interact spatially with objects, let’s just tell them so.
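In code, the flat view is nothing but a lookup table over opaque ids. A minimal sketch in Python, reusing the transitions above:

```python
# The "flat" view: states are opaque ids, dynamics are a lookup table.
# Nothing about "s5" tells the agent anything about "s24".
transitions = {
    ("s1", "a0"): "s5",
    ("s5", "a2"): "s24",
    ("s24", "a1"): "s1",
}

state = "s1"
for action in ["a0", "a2", "a1"]:
    state = transitions[(state, action)]
print(state)  # "s1": back where it started, with no structure learned
```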

Page 4: What we did

• Grab ideas from Relational RL and come up with a representation that:
  – is suitable for a wide-enough range of domains
  – is tractable
  – provides opportunities for generalization
  – enables smart exploration

• Strike a balance between generality and tractability.

Page 5: OO representation

• Problem defined by a set of objects and their attributes.
• Example: Objects in Pitfall defined by a bounding box on a set of pixels, based on color.

Man.<x,y>

Hole.<x,y>

Ladder.<x,y>

Wall.<x,y>

Log.<x,y>

• State is the union of all objects’ attribute values.
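A minimal sketch of this representation in Python (the class layout and the attribute values here are ours, for illustration):

```python
from dataclasses import dataclass

# Each object is a typed bundle of attributes; the state is the
# union of all objects' attribute values.
@dataclass
class Obj:
    cls: str  # object class, e.g. "Man", "Ladder", "Hole"
    x: int
    y: int

objects = [Obj("Man", 12, 32), Obj("Ladder", 12, 32), Obj("Hole", 40, 8)]

# The state: simply the concatenation of every object's attributes.
state = tuple((o.cls, o.x, o.y) for o in objects)
```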

Page 6: OO representation

• For any given state s, there is a function c(s) that tells us which relations occur under s.
• Dynamics defined by preconditions and effects.
• Preconditions are conjunctions of terms:
  – Relations between objects:
    • touchN/S/E/W(object_i, object_j)
    • on(object_i, object_j)
  – Any (boolean) function on the attributes.
  – Any other function encoding prior knowledge.
• Actions have effects that determine how objects’ attributes get modified.

• Example: for the action Up, the condition on(Man, Ladder) produces the effect Man.y = Man.y + 8.
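As a sketch, that Up rule might look as follows in Python (the Entity class and the grid-based on() test are simplifications of ours, not the paper’s definitions):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    x: int
    y: int

def on(a: Entity, b: Entity) -> bool:
    # Simplified stand-in for the on(.,.) relation: same grid position.
    return a.x == b.x and a.y == b.y

def apply_up(man: Entity, ladder: Entity) -> None:
    """One condition-effect rule for the action Up."""
    # Precondition: a conjunction of terms, here just on(Man, Ladder).
    if on(man, ladder):
        man.y += 8  # effect: how Man's attribute gets modified
    # If the condition does not hold, Up has no effect on Man.y.

man, ladder = Entity(12, 32), Entity(12, 32)
apply_up(man, ladder)
print(man.y)  # 40
```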

Page 7: DOORMax

• An algorithm for efficient learning of deterministic OO-MDPs.
• When objects interact and an effect is observed, DOORMax learns the conjunction of terms that enabled the effect (see the sketch below).
• Belongs to the R-Max family of algorithms:
  – Guides exploration to make objects interact.
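A toy sketch of that learning step (our own encoding of terms as strings; not the full algorithm): the condition for an observed effect is the intersection of the term sets that held true whenever the effect occurred.

```python
def learn_condition(positive_term_sets):
    """Intersect the terms that held whenever the effect was observed."""
    condition = None
    for terms in positive_term_sets:
        condition = set(terms) if condition is None else condition & set(terms)
    return condition

obs = [
    {"on(Man,Ladder)", "touchE(Man,Wall)"},
    {"on(Man,Ladder)", "touchN(Man,Log)"},
]
print(learn_condition(obs))  # {'on(Man,Ladder)'}
```

Each positive observation can only remove terms, so a condition changes at most n times; with at most m effects per action, this is the source of the O(nm) behavior analyzed on Page 9.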

Page 8: Pitfall video

Page 9: DOORMax Analysis

• Let n be the number of terms.
• Assume that:
  – The number of effects per action is bounded by a (small) constant m.
  – Each effect has a unique conjunctive condition.

• As long as effects are observed (that is, some effect occurs given an action a), DOORMax will learn the condition-effect pairs that determine the dynamics of a in O(nm). When lots of no-effects are observed, there is a worst-case bound of O(n^m).

Page 10: Results

What about this game?

[Embedded video]

Page 11: Representations in Taxi

Algorithm       # of steps   Time per step   What we have to tell it
Q-learning      47157        <1 ms           #states, #actions
MaxQ            6298         10 ms           Task hierarchy, DBNs for each task
Flat Rmax       4151         ~40 ms          #states, #actions
Factored Rmax   1676         44 ms           DBN
DSHP            319          11 ms           Task hierarchy, DBNs for each task
DOORMax         529          14 ms           Object representation

Page 12: Bigger Taxi

                                 Taxi 5x5     Taxi 10x10    Ratio
# States                         500          7200          14.40
Factored Rmax                    1676 steps   19100 steps   11.39
DOORMax                          529 steps    821 steps     1.55
DOORMax with transfer from 5x5   529 steps    529 steps     1.00

Page 13: Conclusions and future work

• OO-MDPs provide a natural way of modeling an interesting set of domains, while enabling generalization and smart exploration.

• DOORMax learns deterministic OO-MDPs, outperforming state-of-the-art algorithms for factored-state representations.

• DOORMax scales very nicely with respect to the size of the state space, as long as transition dynamics between objects do not change.

• We do not have a provably efficient algorithm for stochastic OO-MDPs.

• We do not yet handle inheritance between classes of objects.