
Pondering Probabilistic Play Policies for Pig

Todd W. Neller, Gettysburg College

Sow What’s This All About?

• The Dice Game “Pig”
• Odds and Ends
• Playing to Win
  – “Piglet”
  – Value Iteration
• Machine Learning

Pig: The Game

•Object: First to score 100 points

• On your turn, roll until:
  – You roll 1, and score NOTHING.
  – You hold, and KEEP the sum.

• Simple game → simple strategy?

•Let’s play…

Playing to Score

• Simple odds argument
  – Roll until you risk more than you stand to gain.
  – “Hold at 20” (worked out in the sketch below):
    • 1/6 of the time: lose the 20-point turn total → -20/6
    • 5/6 of the time: gain +4 on average (mean of 2, 3, 4, 5, 6) → +20/6
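
The break-even point falls out of a one-line expected-value calculation. Here is a minimal sketch that checks the argument numerically (the function name and the Python phrasing are mine, not from the talk):

```python
def expected_gain(turn_total):
    """Expected change in the turn total from one more roll in Pig.

    With probability 1/6 we roll a 1 and lose the whole turn total;
    with probability 5/6 we add the average of 2..6, which is 4.
    """
    return (5 * 4 - turn_total) / 6

# Positive below 20, zero at exactly 20, negative above; hence "hold at 20".
for k in (16, 20, 24):
    print(k, round(expected_gain(k), 3))   # 0.667, 0.0, -0.667
```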

Hold at 20?

• Is there a situation in which you wouldn’t want to hold at 20?
  – Your score: 99; you roll a 2 (holding now already wins)
  – Another case:
    • You: 79, opponent: 99
    • Your turn total stands at 20 (holding leaves the opponent just one short turn from winning)

What’s Wrong With Playing to Score?

• It’s mathematically optimal!
• But what are we optimizing?
• Playing to score ≠ playing to win
• Optimizing score per turn ≠ optimizing probability of a win

Piglet

•Simpler version of Pig with a coin

• Object: First to score 10 points
• On your turn, flip until:
  – You flip tails, and score NOTHING.
  – You hold, and KEEP the # of heads.

•Even simpler: play to 2 points

Essential Information

• What is the information I need to make a fully informed decision?
  – My score
  – The opponent’s score
  – My “turn score”

A Little Notation

• P(i, j, k) = the probability of a win, where
  i = my score
  j = the opponent’s score
  k = my turn score
• Hold: P(i, j, k) = 1 - P(j, i+k, 0)
• Flip: P(i, j, k) = ½(1 - P(j, i, 0)) + ½ P(i, j, k+1)

Assume Rationality

•To make a smart player, assume a smart opponent.

• (To make a smarter player, know your opponent.)

• P(i, j, k) = max(1 - P(j, i+k, 0), ½(1 - P(j, i, 0) + P(i, j, k+1)))

• The probability of a win, assuming the best decision in every state (a code sketch follows below)
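
As a rough sketch of how these recurrences look in code (the talk shows no implementation, so names like `GOAL` and `lookup` are mine):

```python
GOAL = 2   # the "even simpler" Piglet played to 2 points

def lookup(P, i, j, k):
    # Winning states (banked score plus turn total reaches the goal) are worth 1.
    return 1.0 if i + k >= GOAL else P[(i, j, k)]

def hold_value(P, i, j, k):
    # Hold: bank k points and hand the turn to the opponent at (j, i + k, 0).
    return 1.0 - lookup(P, j, i + k, 0)

def flip_value(P, i, j, k):
    # Tails (prob 1/2): lose the turn total; heads (prob 1/2): the turn total grows.
    return 0.5 * (1.0 - lookup(P, j, i, 0)) + 0.5 * lookup(P, i, j, k + 1)

def best_value(P, i, j, k):
    # Assume rationality: take whichever action gives the higher win probability.
    return max(hold_value(P, i, j, k), flip_value(P, i, j, k))
```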

The Whole Story

P(0,0,0) = max(1 - P(0,0,0), ½(1 - P(0,0,0) + P(0,0,1)))
P(0,0,1) = max(1 - P(0,1,0), ½(1 - P(0,0,0) + P(0,0,2)))
P(0,1,0) = max(1 - P(1,0,0), ½(1 - P(1,0,0) + P(0,1,1)))
P(0,1,1) = max(1 - P(1,1,0), ½(1 - P(1,0,0) + P(0,1,2)))
P(1,0,0) = max(1 - P(0,1,0), ½(1 - P(0,1,0) + P(1,0,1)))
P(1,1,0) = max(1 - P(1,1,0), ½(1 - P(1,1,0) + P(1,1,1)))

The Whole Story

The terms P(0,0,2), P(0,1,2), P(1,0,1), and P(1,1,1) on the right-hand sides are winning states! (The score plus turn total has already reached 2, so each equals 1.)

The Whole Story

P(0,0,0) = max(1 - P(0,0,0), ½(1 - P(0,0,0) + P(0,0,1)))
P(0,0,1) = max(1 - P(0,1,0), ½(1 - P(0,0,0) + 1))
P(0,1,0) = max(1 - P(1,0,0), ½(1 - P(1,0,0) + P(0,1,1)))
P(0,1,1) = max(1 - P(1,1,0), ½(1 - P(1,0,0) + 1))
P(1,0,0) = max(1 - P(0,1,0), ½(1 - P(0,1,0) + 1))
P(1,1,0) = max(1 - P(1,1,0), ½(1 - P(1,1,0) + 1))

Simplified…

The Whole Story

P(0,0,0) = max(1 - P(0,0,0), ½(1 - P(0,0,0) + P(0,0,1)))
P(0,0,1) = max(1 - P(0,1,0), ½(2 - P(0,0,0)))
P(0,1,0) = max(1 - P(1,0,0), ½(1 - P(1,0,0) + P(0,1,1)))
P(0,1,1) = max(1 - P(1,1,0), ½(2 - P(1,0,0)))
P(1,0,0) = max(1 - P(0,1,0), ½(2 - P(0,1,0)))
P(1,1,0) = max(1 - P(1,1,0), ½(2 - P(1,1,0)))

And simplified more into a hamsome set of equations…

How to Solve It?

The same six equations are circular: P(0,1,0) depends on P(0,1,1), which depends on P(1,0,0), which depends on P(0,1,0), which depends on …

A System of Pigquations

[Diagram: dependencies among the non-winning states P(0,0,0), P(0,0,1), P(0,1,0), P(0,1,1), P(1,0,0), and P(1,1,0)]

How Bad Is It?

•The intersection of a set of bent hyperplanes in a hypercube

• In the general case, no known method (read: PhD research)

• Is there a method that works here (without being guaranteed to work in general)?
  – Yes! Value Iteration!

Value Iteration

•Start out with some values (0’s, 1’s, random #’s)

• Do the following until the values converge (stop changing):
  – Plug the values into the RHS’s
  – Recompute the LHS values

•That’s easy. Let’s do it!

Value Iteration

Using the six Pigquations above:

• Assume P(i, j, k) is 0 unless it’s a win
• Repeat: compute the RHS’s, assign them to the LHS’s (a sketch of this loop follows below)
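
A compact value-iteration loop for Piglet, as a sketch of the procedure just described (the state enumeration, tolerance, and names are my choices, not the talk’s):

```python
def solve_piglet(goal=2, eps=1e-9):
    # Non-winning states (i, j, k): my score, opponent's score, my turn total.
    states = [(i, j, k) for i in range(goal)
                        for j in range(goal)
                        for k in range(goal - i)]
    P = {s: 0.0 for s in states}   # assume 0 unless it's a win

    def val(i, j, k):
        # Winning states are worth 1; otherwise use the current estimate.
        return 1.0 if i + k >= goal else P[(i, j, k)]

    while True:
        change = 0.0
        for (i, j, k) in states:
            hold = 1.0 - val(j, i + k, 0)
            flip = 0.5 * (1.0 - val(j, i, 0)) + 0.5 * val(i, j, k + 1)
            new = max(hold, flip)
            change = max(change, abs(new - P[(i, j, k)]))
            P[(i, j, k)] = new
        if change < eps:   # values have (effectively) stopped changing
            return P

P = solve_piglet()
print(P[(0, 0, 0)])   # first player's win probability in 2-point Piglet (0.6)
```

Running the same loop with goal=10 yields the probabilities from which the hold-value table on the “Piglet Solved” slide below is derived.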

But That’s GRUNT Work!

•So have a computer do it, slacker!

• Not difficult – end of CS1 level
• Fast! Don’t blink – you’ll miss it
• Optimal play:
  – Compute the probabilities
  – Determine flip/hold from the RHS max’s
  – (For our equations, always FLIP)

Piglet Solved

• Game to 10
• Play to Score: “Hold at 1”
• Play to Win: see the hold-value table below

Little Pig Hold Values (hold once your turn total reaches this value)

You \ Opponent   0   1   2   3   4   5   6   7   8   9
0                2   2   2   2   2   2   2   2   3  10
1                1   2   2   2   2   2   2   3   3   9
2                1   1   2   2   2   2   2   2   3   8
3                1   1   1   2   2   2   2   2   3   7
4                1   1   1   1   2   2   2   2   2   6
5                1   1   1   1   1   2   2   2   2   5
6                1   1   1   1   1   1   2   2   2   4
7                1   1   1   1   1   1   1   1   3   3
8                1   1   1   1   1   1   1   2   2   2
9                1   1   1   1   1   1   1   1   1   1

Pig Probabilities

•Just like Piglet, but more possible outcomes

• P(i, j, k) = max(1 - P(j, i+k, 0), 1/6 (1 - P(j, i, 0) + P(i, j, k+2) + P(i, j, k+3) + P(i, j, k+4) + P(i, j, k+5) + P(i, j, k+6)))
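
The corresponding one-step backup for Pig, as a sketch of the equation above (again, the `lookup` helper and the names are mine):

```python
GOAL = 100   # first to 100 points wins

def lookup(P, i, j, k):
    # Winning states are worth 1; everything else uses the current estimates.
    return 1.0 if i + k >= GOAL else P[(i, j, k)]

def roll_value(P, i, j, k):
    # Rolling a 1 (prob 1/6) forfeits the turn total; 2..6 each add to it.
    total = 1.0 - lookup(P, j, i, 0)
    for r in range(2, 7):
        total += lookup(P, i, j, k + r)
    return total / 6.0

def hold_value(P, i, j, k):
    return 1.0 - lookup(P, j, i + k, 0)

# Optimal value: max(hold_value(...), roll_value(...)),
# iterated to convergence exactly as in the Piglet loop above.
```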

Solving Pig

• 505,000 such equations
• Same simple solution method (value iteration)

•Speedup: Solve groups of interdependent probabilities

•Watch and see!

Pig Sow-lution

Reachable States

[Figure: reachable game states, shown for the slice where Player 2 Score (j) = 30]

Sow-lution for Reachable States

Probability Contours

Summary

•Playing to score is not playing to win.

•A simple game is not always simple to play.

•The computer is an exciting power tool for the mind!

When Value Iteration Isn’t Enough

• Value Iteration assumes a model of the problem
  – Probabilities of state transitions
  – Expected rewards for transitions

• Loaded die?
• Optimal play vs. a suboptimal player?
• Game rules unknown?

No Model? Then Learn!

• Can’t write the equations → can’t solve them

• Must learn from experience!
• Reinforcement Learning (a generic sketch follows below):
  – Learn optimal sequences of actions
  – From experience
  – Given positive/negative feedback
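
For concreteness, one standard flavor of reinforcement learning is tabular Q-learning; the talk does not commit to a particular algorithm here, so the sketch below is generic, and all names and hyperparameters are illustrative:

```python
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 1.0, 0.1  # learning rate, discount, exploration rate

def choose_action(state, actions):
    # Epsilon-greedy: usually exploit the current estimates, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state, next_actions):
    # Nudge Q toward the observed reward plus the best estimated future value.
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```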

Clever Mabel the Cat

• Mabel claws the new La-Z-Boy → BAD!
• Cats hate water → spray bottle = negative reinforcement
• Mabel claws La-Z-Boy → Todd gets up → Todd sprays Mabel → Mabel gets negative feedback

•Mabel learns…

Clever Mabel the Cat

•Mabel learns to run when Todd gets up.

• Mabel first learns local causality:
  – Todd gets up → Todd sprays Mabel

•Mabel eventually sees no correlation, learns indirect cause

•Mabel happily claws carpet. The End.

Backgammon

• Tesauro’s Neurogammon
  – Reinforcement Learning + Neural Network (memory for learning)
  – Learned backgammon through self-play
  – Got better than all but a handful of people in the world!
  – Downside: took 1.5 million games to learn

Greased Pig

• My continuous variant of Pig
• Object: First to score 100 points
• On your turn, generate a random number from 0.5 to 6.5 until:
  – Your rounded number is 1, and you score NOTHING.
  – You hold, and KEEP the sum.
• How does this change things? (a simulation sketch follows below)
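
A few lines suffice to simulate one Greased Pig turn under a simple “hold at N” policy; this is only a sketch to make the rule change concrete (the function name and the policy are mine):

```python
import random

def greased_pig_turn(hold_at):
    """Play one Greased Pig turn: keep generating a uniform number in [0.5, 6.5]
    until it rounds to 1 (score nothing) or the sum reaches the hold target."""
    total = 0.0
    while total < hold_at:
        x = random.uniform(0.5, 6.5)
        if round(x) == 1:        # the continuous analogue of rolling a 1
            return 0.0
        total += x
    return total

# Example: average turn score for a "hold at 20"-style policy.
scores = [greased_pig_turn(20) for _ in range(100_000)]
print(sum(scores) / len(scores))
```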

Greased Pig Challenges

•Infinite possible game states

• Infinite possible games
• Limited experience
• Limited memory
• Learning and approximation challenge

Summary

•Solving equations can only take you so far (but much farther than we can fathom).

•Machine learning is an exciting area of research that can take us farther.

•The power of computing will increasingly aid in our learning in the future.
