Tutorial: Deep Reinforcement Learning
David Silver, Google DeepMind


Page 1

Tutorial: Deep Reinforcement Learning

David Silver, Google DeepMind

Page 2

Outline

Introduction to Deep Learning

Introduction to Reinforcement Learning

Value-Based Deep RL

Policy-Based Deep RL

Model-Based Deep RL

Page 3

Reinforcement Learning in a nutshell

RL is a general-purpose framework for decision-making:

- RL is for an agent with the capacity to act
- Each action influences the agent's future state
- Success is measured by a scalar reward signal
- Goal: select actions to maximise future reward

Page 4

Deep Learning in a nutshell

DL is a general-purpose framework for representation learning:

- Given an objective
- Learn the representation required to achieve that objective
- Directly from raw inputs
- Using minimal domain knowledge

Page 5

Deep Reinforcement Learning: AI = RL + DL

We seek a single agent which can solve any human-level task:

- RL defines the objective
- DL gives the mechanism
- RL + DL = general intelligence

Page 6

Examples of Deep RL @DeepMind

- Play games: Atari, poker, Go, ...
- Explore worlds: 3D worlds, Labyrinth, ...
- Control physical systems: manipulate, walk, swim, ...
- Interact with users: recommend, optimise, personalise, ...

Page 7

Outline

Introduction to Deep Learning

Introduction to Reinforcement Learning

Value-Based Deep RL

Policy-Based Deep RL

Model-Based Deep RL

Page 8

Deep Representations

- A deep representation is a composition of many functions

    x → h_1 → ... → h_n → y → l
         ↑               ↑
        w_1     ...     w_n

- Its gradient can be backpropagated by the chain rule

    ∂l/∂h_n = ∂l/∂y · ∂y/∂h_n
    ∂l/∂h_{k-1} = ∂l/∂h_k · ∂h_k/∂h_{k-1}
    ∂l/∂w_k = ∂l/∂h_k · ∂h_k/∂w_k
    ∂l/∂x = ∂l/∂h_1 · ∂h_1/∂x
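To make the composition and chain rule above concrete, here is a minimal sketch (not from the slides) of a forward and backward pass through a two-layer composition. The ReLU non-linearity, mean-squared-error loss, and all shapes are my illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 1))            # raw input
    y_star = rng.normal(size=(2, 1))       # target output
    W1 = rng.normal(size=(3, 4)) * 0.1     # weights of the first function
    W2 = rng.normal(size=(2, 3)) * 0.1     # weights of the second function

    # forward pass: a composition of functions
    h1 = W1 @ x                            # linear transformation
    h2 = np.maximum(h1, 0.0)               # non-linear activation (ReLU)
    y = W2 @ h2                            # output
    l = float(np.sum((y_star - y) ** 2))   # mean-squared-error loss

    # backward pass: the chain rule, from the loss back to the weights
    dl_dy = -2.0 * (y_star - y)            # ∂l/∂y
    dl_dW2 = dl_dy @ h2.T                  # ∂l/∂W2 = ∂l/∂y · ∂y/∂W2
    dl_dh2 = W2.T @ dl_dy                  # ∂l/∂h2
    dl_dh1 = dl_dh2 * (h1 > 0)             # ∂l/∂h1 (ReLU gate)
    dl_dW1 = dl_dh1 @ x.T                  # ∂l/∂W1 = ∂l/∂h1 · ∂h1/∂W1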

Page 9

Deep Neural Network

A deep neural network is typically composed of:

- Linear transformations: h_{k+1} = W h_k
- Non-linear activation functions: h_{k+2} = f(h_{k+1})
- A loss function on the output, e.g.
  - Mean-squared error: l = ||y* - y||^2
  - Log likelihood: l = log P[y*]

Page 10

Training Neural Networks by Stochastic Gradient Descent

- Sample the gradient of the expected loss L(w) = E[l]

    ∂l/∂w ~ E[∂l/∂w] = ∂L(w)/∂w

- Adjust w down the sampled gradient

    Δw ∝ -∂l/∂w

[Figure: gradient descent. The critic returns an error which, in combination with the function approximator, can be used to create an error function; the partial differential of this error function (the gradient) can then be used to update the internal variables in the function approximator (and critic).]
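A minimal sketch of the training loop implied above, assuming PyTorch and a toy regression target: sample a minibatch, compute the sampled loss, backpropagate, and adjust w down the gradient.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.SGD(net.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for step in range(1000):
        x = torch.randn(32, 8)                 # sampled minibatch of raw inputs
        y_star = x.sum(dim=1, keepdim=True)    # toy regression target (assumption)
        l = loss_fn(net(x), y_star)            # sampled loss
        opt.zero_grad()
        l.backward()                           # backpropagate ∂l/∂w
        opt.step()                             # Δw ∝ -∂l/∂w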

Page 11

Weight Sharing

Recurrent neural network shares weights between time-steps:

    ... → h_t → h_{t+1} → ...        the same weights w are applied at every
           ↑        ↑                time-step, producing outputs y_t, y_{t+1}, ...
          x_t     x_{t+1}

Convolutional neural network shares weights between local regions:

    the same filter weights w1, w2 are applied to different local regions
    of the input x to produce the feature maps h1, h2.
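As a small illustration of sharing weights across time-steps (my sketch, not from the slides): the same matrices W_xh, W_hh, W_hy are reused at every step of a simple recurrent network.

    import numpy as np

    def rnn_forward(xs, W_xh, W_hh, W_hy):
        """Run a simple RNN over a sequence; W_xh, W_hh, W_hy are shared across t."""
        h = np.zeros(W_hh.shape[0])
        ys = []
        for x_t in xs:                          # same weights applied at every step
            h = np.tanh(W_xh @ x_t + W_hh @ h)  # h_t depends on x_t and h_{t-1}
            ys.append(W_hy @ h)                 # output y_t
        return ys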

Page 12

Outline

Introduction to Deep Learning

Introduction to Reinforcement Learning

Value-Based Deep RL

Policy-Based Deep RL

Model-Based Deep RL

Page 13

Many Faces of Reinforcement Learning

[Diagram: reinforcement learning sits at the intersection of many fields - Computer Science (machine learning), Engineering (optimal control), Mathematics (operations research), Economics (rationality/game theory), Psychology (classical/operant conditioning), and Neuroscience (reward system).]

Page 14

Agent and Environment

[Diagram: the agent selects actions a_t and receives observations o_t and rewards r_t from the environment.]

- At each step t the agent:
  - Executes action a_t
  - Receives observation o_t
  - Receives scalar reward r_t
- The environment:
  - Receives action a_t
  - Emits observation o_{t+1}
  - Emits scalar reward r_{t+1}

Page 15

State

- Experience is a sequence of observations, actions, rewards

    o_1, r_1, a_1, ..., a_{t-1}, o_t, r_t

- The state is a summary of experience

    s_t = f(o_1, r_1, a_1, ..., a_{t-1}, o_t, r_t)

- In a fully observed environment

    s_t = f(o_t)

Page 16

Major Components of an RL Agent

An RL agent may include one or more of these components:

- Policy: agent's behaviour function
- Value function: how good is each state and/or action
- Model: agent's representation of the environment

Page 17

Policy

- A policy is the agent's behaviour
- It is a map from state to action:
  - Deterministic policy: a = π(s)
  - Stochastic policy: π(a|s) = P[a|s]

Page 18

Value Function

- A value function is a prediction of future reward
  - "How much reward will I get from action a in state s?"
- Q-value function gives expected total reward
  - from state s and action a
  - under policy π
  - with discount factor γ

    Q^π(s, a) = E[r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + ... | s, a]

- Value functions decompose into a Bellman equation

    Q^π(s, a) = E_{s', a'}[r + γ Q^π(s', a') | s, a]


Page 20

Optimal Value Functions

- An optimal value function is the maximum achievable value

    Q*(s, a) = max_π Q^π(s, a) = Q^{π*}(s, a)

- Once we have Q* we can act optimally,

    π*(s) = argmax_a Q*(s, a)

- Optimal value maximises over all decisions. Informally:

    Q*(s, a) = r_{t+1} + γ max_{a_{t+1}} r_{t+2} + γ^2 max_{a_{t+2}} r_{t+3} + ...
             = r_{t+1} + γ max_{a_{t+1}} Q*(s_{t+1}, a_{t+1})

- Formally, optimal values decompose into a Bellman equation

    Q*(s, a) = E_{s'}[r + γ max_{a'} Q*(s', a') | s, a]
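To make the Bellman optimality equation concrete, here is a minimal sketch (not from the slides) of Q-value iteration on a made-up two-state, two-action MDP with known dynamics; repeatedly applying the backup converges to Q*, from which the greedy policy π* follows.

    import numpy as np

    n_states, n_actions, gamma = 2, 2, 0.9
    # P[s, a, s'] = transition probability, R[s, a] = expected reward (made up)
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

    Q = np.zeros((n_states, n_actions))
    for _ in range(200):
        Q = R + gamma * P @ Q.max(axis=1)    # Bellman optimality backup
    pi_star = Q.argmax(axis=1)               # greedy policy: argmax_a Q*(s, a)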


Page 24

Value Function Demo


Page 26

Model

[Diagram: the agent contains a model of the environment that predicts observations o_t and rewards r_t given actions a_t.]

- Model is learnt from experience
- Acts as a proxy for the environment
- Planner interacts with the model
  - e.g. using lookahead search

Page 27

Approaches To Reinforcement Learning

Value-based RL
- Estimate the optimal value function Q*(s, a)
- This is the maximum value achievable under any policy

Policy-based RL
- Search directly for the optimal policy π*
- This is the policy achieving maximum future reward

Model-based RL
- Build a model of the environment
- Plan (e.g. by lookahead) using the model

Page 28

Deep Reinforcement Learning

- Use deep neural networks to represent
  - Value function
  - Policy
  - Model
- Optimise the loss function by stochastic gradient descent

Page 29

Outline

Introduction to Deep Learning

Introduction to Reinforcement Learning

Value-Based Deep RL

Policy-Based Deep RL

Model-Based Deep RL

Page 30

Q-Networks

Represent the value function by a Q-network with weights w:

    Q(s, a, w) ≈ Q*(s, a)

[Diagram: two architectures - a network that takes (s, a) as input and outputs Q(s, a, w), and a network that takes s as input and outputs Q(s, a_1, w), ..., Q(s, a_m, w) for all actions.]
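A minimal sketch of the second architecture above (one Q-value output per action), assuming PyTorch; the layer sizes are arbitrary.

    import torch.nn as nn

    class QNetwork(nn.Module):
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_actions),   # one Q-value per action
            )

        def forward(self, s):
            return self.net(s)               # shape: (batch, n_actions)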

Page 31

Q-Learning

- Optimal Q-values should obey the Bellman equation

    Q*(s, a) = E_{s'}[r + γ max_{a'} Q*(s', a') | s, a]

- Treat the right-hand side r + γ max_{a'} Q(s', a', w) as a target
- Minimise the MSE loss by stochastic gradient descent

    l = (r + γ max_{a'} Q(s', a', w) - Q(s, a, w))^2

- Converges to Q* using a table lookup representation
- But diverges using neural networks due to:
  - Correlations between samples
  - Non-stationary targets
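For the tabular case mentioned above, a minimal Q-learning update sketch (not from the slides): move Q(s, a) toward the Bellman target r + γ max_a' Q(s', a').

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        """One Q-learning step on a table Q[state, action]."""
        target = r + gamma * np.max(Q[s_next])   # Bellman target
        Q[s, a] += alpha * (target - Q[s, a])    # move Q(s, a) toward the target
        return Q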


Page 34

Deep Q-Networks (DQN): Experience Replay

To remove correlations, build a data-set from the agent's own experience:

    s_1, a_1, r_2, s_2
    s_2, a_2, r_3, s_3        →  s, a, r, s'
    s_3, a_3, r_4, s_4
    ...
    s_t, a_t, r_{t+1}, s_{t+1}

Sample experiences from the data-set and apply the update

    l = (r + γ max_{a'} Q(s', a', w⁻) - Q(s, a, w))^2

To deal with non-stationarity, the target parameters w⁻ are held fixed.
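A minimal sketch of the two tricks above, assuming PyTorch: a replay memory of (s, a, r, s') tuples, and a loss computed against a separate target network whose parameters w⁻ are only updated by periodic copies. The helper names are illustrative.

    import random
    from collections import deque
    import torch
    import torch.nn as nn

    replay = deque(maxlen=100_000)            # data-set of (s, a, r, s') tuples

    def sample_batch(batch_size=64):
        # assumes states are stored as tensors, actions as ints, rewards as floats
        s, a, r, s_next = zip(*random.sample(replay, batch_size))
        return (torch.stack(s), torch.tensor(a),
                torch.tensor(r, dtype=torch.float32), torch.stack(s_next))

    def dqn_loss(batch, q_net, target_net, gamma=0.99):
        s, a, r, s_next = batch
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a, w)
        with torch.no_grad():                                      # w⁻ held fixed
            target = r + gamma * target_net(s_next).max(dim=1).values
        return nn.functional.mse_loss(q, target)

    # periodically: target_net.load_state_dict(q_net.state_dict())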

Page 35

Deep Reinforcement Learning in Atari

[Diagram: agent-environment loop in Atari - the agent receives state s_t and reward r_t from the emulator and selects joystick action a_t.]

Page 36

DQN in Atari

- End-to-end learning of values Q(s, a) from pixels s
- Input state s is a stack of raw pixels from the last 4 frames
- Output is Q(s, a) for 18 joystick/button positions
- Reward is the change in score for that step

Network architecture and hyperparameters are fixed across all games.

Page 37

DQN Results in Atari

Page 38

DQN Atari Demo

DQN paper: www.nature.com/articles/nature14236

DQN source code: sites.google.com/a/deepmind.com/dqn/

[Image: Nature cover, 26 February 2015 (Vol. 518, No. 7540) - "Self-taught AI software attains human-level performance in video games".]

Page 39

Improvements since Nature DQN

- Double DQN: remove the upward bias caused by max_a Q(s, a, w)
  - Current Q-network w is used to select actions
  - Older Q-network w⁻ is used to evaluate actions

    l = (r + γ Q(s', argmax_{a'} Q(s', a', w), w⁻) - Q(s, a, w))^2

- Prioritised replay: weight experience according to surprise
  - Store experience in a priority queue according to the DQN error

    |r + γ max_{a'} Q(s', a', w⁻) - Q(s, a, w)|

- Duelling network: split the Q-network into two channels
  - Action-independent value function V(s, v)
  - Action-dependent advantage function A(s, a, w)

    Q(s, a) = V(s, v) + A(s, a, w)

Combined algorithm: 3x mean Atari score vs Nature DQN
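A minimal sketch of the Double DQN target above, assuming PyTorch: the online network w selects the action, the older target network w⁻ evaluates it.

    import torch

    def double_dqn_target(r, s_next, q_net, target_net, gamma=0.99):
        with torch.no_grad():
            a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # select with w
            q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # evaluate with w⁻
        return r + gamma * q_eval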


Page 43

Gorila (General Reinforcement Learning Architecture)

- 10x faster than Nature DQN on 38 out of 49 Atari games
- Applied to recommender systems within Google

Page 44

Asynchronous Reinforcement Learning

- Exploits multithreading of standard CPUs
- Executes many instances of the agent in parallel
- Network parameters are shared between threads
- Parallelism decorrelates data
  - Viable alternative to experience replay
- Similar speedup to Gorila - on a single machine!

Page 45

Outline

Introduction to Deep Learning

Introduction to Reinforcement Learning

Value-Based Deep RL

Policy-Based Deep RL

Model-Based Deep RL

Page 46

Deep Policy Networks

- Represent the policy by a deep network with weights u

    a = π(a|s, u)  or  a = π(s, u)

- Define the objective function as the total discounted reward

    L(u) = E[r_1 + γ r_2 + γ^2 r_3 + ... | π(·, u)]

- Optimise the objective end-to-end by SGD
  - i.e. adjust policy parameters u to achieve more reward

Page 47

Policy Gradients

How to make high-value actions more likely:

- The gradient of a stochastic policy π(a|s, u) is given by

    ∂L(u)/∂u = E[∂log π(a|s, u)/∂u · Q^π(s, a)]

- The gradient of a deterministic policy a = π(s) is given by

    ∂L(u)/∂u = E[∂Q^π(s, a)/∂a · ∂a/∂u]

  - if a is continuous and Q is differentiable
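A minimal sketch of the stochastic policy gradient above, assuming PyTorch and using a sampled return as a stand-in for Q^π(s, a); the surrogate loss is constructed so that its gradient is the estimate ∂log π(a|s, u)/∂u · Q.

    import torch

    def policy_gradient_loss(policy_net, states, actions, returns):
        """Surrogate loss whose gradient is the policy-gradient estimate."""
        logits = policy_net(states)                      # unnormalised action scores
        log_probs = torch.log_softmax(logits, dim=1)     # log π(a|s, u)
        log_pi_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        return -(log_pi_a * returns).mean()              # ascend E[log π · Q]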


Page 49

Actor-Critic Algorithm

- Estimate the value function Q(s, a, w) ≈ Q^π(s, a)
- Update policy parameters u by stochastic gradient ascent

    ∂l/∂u = ∂log π(a|s, u)/∂u · Q(s, a, w)

  or

    ∂l/∂u = ∂Q(s, a, w)/∂a · ∂a/∂u
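A minimal actor-critic step along these lines (my sketch, with assumed interfaces noted in the comments): the critic is fit by a TD update and the actor ascends ∂log π(a|s, u)/∂u · Q(s, a, w).

    import torch
    import torch.nn.functional as F

    def actor_critic_step(actor, critic, actor_opt, critic_opt,
                          s, a, r, s_next, a_next, gamma=0.99):
        # Assumed interfaces: actor(s) -> action logits, critic(s) -> Q-values per
        # action; a, a_next are integer action indices of shape (batch, 1);
        # r has shape (batch, 1).
        q_sa = critic(s).gather(1, a)                          # Q(s, a, w)
        with torch.no_grad():
            td_target = r + gamma * critic(s_next).gather(1, a_next)
        critic_loss = F.mse_loss(q_sa, td_target)              # fit the critic
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        log_pi = torch.log_softmax(actor(s), dim=1).gather(1, a)
        actor_loss = -(log_pi * q_sa.detach()).mean()          # ascend log π · Q
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()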

Page 50

Asynchronous Advantage Actor-Critic (A3C)

- Estimate the state-value function

    V(s, v) ≈ E[r_{t+1} + γ r_{t+2} + ... | s]

- Q-value estimated by an n-step sample

    q_t = r_{t+1} + γ r_{t+2} + ... + γ^{n-1} r_{t+n} + γ^n V(s_{t+n}, v)

- Actor is updated towards the target

    ∂l_u/∂u = ∂log π(a_t|s_t, u)/∂u · (q_t - V(s_t, v))

- Critic is updated to minimise MSE w.r.t. the target

    l_v = (q_t - V(s_t, v))^2

- 4x mean Atari score vs Nature DQN
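The n-step target q_t above can be computed by folding the rewards backwards from the bootstrap value; a minimal sketch (not from the slides):

    def n_step_return(rewards, bootstrap_value, gamma=0.99):
        """rewards = [r_{t+1}, ..., r_{t+n}], bootstrap_value = V(s_{t+n}, v)."""
        q = bootstrap_value
        for r in reversed(rewards):     # fold rewards back from t+n to t+1
            q = r + gamma * q
        return q                        # the advantage is then q - V(s_t, v)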


Page 53

Deep Reinforcement Learning in Labyrinth

Page 54

A3C in Labyrinth

[Diagram: recurrent A3C network - at each step the observation o_t feeds a recurrent state s_t, which outputs a policy π(a|s_t) and a value V(s_t).]

- End-to-end learning of a softmax policy π(a|s_t) from pixels
- Observations o_t are raw pixels from the current frame
- State s_t = f(o_1, ..., o_t) is a recurrent neural network (LSTM)
- Outputs both value V(s) and softmax over actions π(a|s)
- Task is to collect apples (+1 reward) and escape (+10 reward)

Page 55

A3C Labyrinth Demo

Demo: www.youtube.com/watch?v=nMR5mjCFZCw&feature=youtu.be

Labyrinth source code (coming soon): sites.google.com/a/deepmind.com/labyrinth/

Page 56

Deep Reinforcement Learning with Continuous Actions

How can we deal with high-dimensional continuous action spaces?

- Can't easily compute max_a Q(s, a)
- Actor-critic algorithms learn without the max
- Q-values are differentiable w.r.t. a
  - Deterministic policy gradients exploit knowledge of ∂Q/∂a

Page 57

Deep DPG

DPG is the continuous analogue of DQN:

- Experience replay: build a data-set from the agent's experience
- Critic estimates the value of the current policy by DQN

    l_w = (r + γ Q(s', π(s', u⁻), w⁻) - Q(s, a, w))^2

  To deal with non-stationarity, the targets u⁻, w⁻ are held fixed

- Actor updates the policy in the direction that improves Q

    ∂l_u/∂u = ∂Q(s, a, w)/∂a · ∂a/∂u

- In other words, the critic provides the loss function for the actor
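A minimal sketch of the two DDPG updates above, assuming PyTorch and actor/critic networks with the interfaces noted in the comments; the target copies actor_t, critic_t play the role of u⁻, w⁻.

    import torch
    import torch.nn.functional as F

    def ddpg_step(actor, critic, actor_t, critic_t, actor_opt, critic_opt,
                  s, a, r, s_next, gamma=0.99):
        # assumes: actor(s) -> continuous action, critic(s, a) -> scalar Q value
        with torch.no_grad():                                   # u⁻, w⁻ held fixed
            target = r + gamma * critic_t(s_next, actor_t(s_next))
        critic_loss = F.mse_loss(critic(s, a), target)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        # only the actor optimiser steps here; the critic supplies the loss
        actor_loss = -critic(s, actor(s)).mean()                # ascend Q(s, π(s, u), w)
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()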

Page 58

DPG in Simulated Physics

- Physics domains are simulated in MuJoCo
- End-to-end learning of a control policy from raw pixels s
- Input state s is a stack of raw pixels from the last 4 frames
- Two separate convnets are used for Q and π
- Policy π is adjusted in the direction that most improves Q

[Diagram: one convnet computes π(s); its action a feeds a second convnet computing Q(s, a).]

Page 59

DPG in Simulated Physics Demo

- Demo: DPG from pixels

Page 60

A3C in Simulated Physics Demo

- Asynchronous RL is a viable alternative to experience replay
- Train a hierarchical, recurrent locomotion controller
- Retrain the controller on more challenging tasks

Page 61

Fictitious Self-Play (FSP)

Can deep RL find Nash equilibria in multi-agent games?

- Q-network learns a "best response" to opponent policies
  - By applying DQN with experience replay
  - c.f. fictitious play
- Policy network π(a|s, u) learns an average of best responses

    ∂l/∂u = ∂log π(a|s, u)/∂u

- Actions a sample a mix of the policy network and the best response

Page 62

Neural FSP in Texas Hold'em Poker

- Heads-up limit Texas Hold'em
- NFSP with raw inputs only (no prior knowledge of Poker)
- vs SmooCT (3x medal winner 2015, handcrafted knowledge)

[Figure 2: Performance of NFSP in Limit Texas Hold'em.
(a) Win rates (mbb/h) against SmooCT over training iterations, for NFSP's best response, greedy-average and average strategies. The estimated standard error of each evaluation is less than 10 mbb/h.
(b) Win rates of NFSP's greedy-average strategy against the top 3 agents of the ACPC 2014:

    Match-up       NFSP (mbb/h)
    escabeche      -52.1 ± 8.5
    SmooCT         -17.4 ± 9.0
    Hyperborean    -13.6 ± 9.2]


Page 63

Outline

Introduction to Deep Learning

Introduction to Reinforcement Learning

Value-Based Deep RL

Policy-Based Deep RL

Model-Based Deep RL

Page 64

Learning Models of the Environment

- Demo: generative model of Atari
- Challenging to plan due to compounding errors
  - Errors in the transition model compound over the trajectory
  - Planning trajectories differ from executed trajectories
  - At the end of a long, unusual trajectory, rewards are totally wrong

Page 65

Deep Reinforcement Learning in Go

What if we have a perfect model? e.g. the game rules are known

AlphaGo paper: www.nature.com/articles/nature16961

AlphaGo resources: deepmind.com/alphago/

[Image: Nature cover, 28 January 2016 (Vol. 529, No. 7587) - "All Systems Go: at last, a computer program that can beat a champion Go player".]

Page 66

Conclusion

- General, stable and scalable RL is now possible
- Using deep networks to represent value, policy, model
- Successful in Atari, Labyrinth, Physics, Poker, Go
- Using a variety of deep RL paradigms