
QUIZ!!

T/F: Forward sampling is consistent. True
T/F: Rejection sampling is faster, but inconsistent. False
T/F: Rejection sampling requires less memory than Forward sampling. True
T/F: In likelihood weighted sampling you make use of every single sample. True
T/F: Forward sampling samples from the joint distribution. True
T/F: L.W.S. only speeds up inference for r.v.'s downstream of evidence. True
T/F: Too few samples could result in division by zero. True

[xkcd comic]

CSE511a: Artificial Intelligence, Fall 2013

Lecture 18: Decision Diagrams

04/03/2013

Robert Pless, via Kilian Q. Weinberger

Several slides adapted from Dan Klein – UC Berkeley

Announcements

Class Monday April 8 cancelled in lieu of talk by Sanmay Das:

"Bayesian Reinforcement Learning With Censored Observations and Applications in Market Modeling and Design"

Friday, April 5, 11-12, Lopata 101.

Sampling Example

There are 2 cups. The first contains 1 penny and 1 quarter. The second contains 2 quarters.

Say I pick a cup uniformly at random, then pick a coin randomly from that cup. It's a quarter (yes!). What is the probability that the other coin in that cup is also a quarter?
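This is the classic two-cup teaser, and it can be answered by sampling. Below is a minimal simulation sketch (the function and variable names are mine, not from the slides): draws where the coin is not a quarter are rejected, exactly like rejection sampling on the evidence.

```python
import random

def estimate_other_quarter(n_trials=100_000):
    """Simulate: pick a cup uniformly, pick a coin from it uniformly,
    keep only trials where the drawn coin is a quarter, and record how
    often the remaining coin in that cup is also a quarter."""
    cups = [["penny", "quarter"], ["quarter", "quarter"]]
    hits, kept = 0, 0
    for _ in range(n_trials):
        cup = random.choice(cups)        # pick a cup uniformly at random
        i = random.randrange(2)          # pick one of its two coins
        drawn, other = cup[i], cup[1 - i]
        if drawn != "quarter":
            continue                     # reject: evidence says it's a quarter
        kept += 1
        hits += (other == "quarter")
    return hits / kept

print(estimate_other_quarter())          # converges to 2/3, not 1/2
```

Two of the three equally likely quarter draws come from the two-quarter cup, so the frequency settles near 2/3 rather than 1/2.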

Recap: Likelihood Weighting

Sampling distribution if z is sampled and e is fixed evidence:
S_WE(z, e) = ∏_i P(z_i | Parents(Z_i))

Now, samples have weights:
w(z, e) = ∏_i P(e_i | Parents(E_i))

Together, the weighted sampling distribution is consistent:
S_WE(z, e) · w(z, e) = P(z, e)

[Bayes net diagram: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass]

Likelihood Weighting

CPTs from the slide:

P(C):
  +c 0.5
  -c 0.5

P(S|C):
  +c: +s 0.1, -s 0.9
  -c: +s 0.5, -s 0.5

P(R|C):
  +c: +r 0.8, -r 0.2
  -c: +r 0.2, -r 0.8

P(W|S,R):
  +s, +r: +w 0.99, -w 0.01
  +s, -r: +w 0.90, -w 0.10
  -s, +r: +w 0.90, -w 0.10
  -s, -r: +w 0.01, -w 0.99

Samples: +c, +s, +r, +w, …

[Bayes net diagram: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass]

Inference: sum over the weights of samples that match the query value, then divide by the total sample weight. What is P(C | +w, +s)?
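A minimal likelihood-weighting sketch for exactly this network and query (the CPT numbers are from the slide; the encoding and names are mine):

```python
import random

# CPTs from the slide; True = +, False = -.
P_C = 0.5                                     # P(+c)
P_S = {True: 0.1, False: 0.5}                 # P(+s | C)
P_R = {True: 0.8, False: 0.2}                 # P(+r | C)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}   # P(+w | S, R)

def weighted_sample():
    """One sample for evidence S=+s, W=+w: sample the non-evidence
    variables top-down, fix the evidence, and multiply its likelihood
    into the weight."""
    w = 1.0
    c = random.random() < P_C                 # sample C
    w *= P_S[c]                               # evidence S = +s
    r = random.random() < P_R[c]              # sample R
    w *= P_W[(True, r)]                       # evidence W = +w (S = +s)
    return c, w

def estimate(n=100_000):
    """P(+c | +s, +w): sum of weights where C=+c over total weight."""
    num = den = 0.0
    for _ in range(n):
        c, w = weighted_sample()
        den += w
        num += w * c
    return num / den

print(estimate())                             # ≈ 0.17 for these CPTs
```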

Likelihood Weighting Example

7

Cloudy  Rainy  Sprinkler  WetGrass  Weight
0       1      1          1         0.495
0       0      1          1         0.45
0       0      1          1         0.45
0       0      1          1         0.45
1       0      1          1         0.09
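Working the slide's procedure on these five samples: the total weight is 0.495 + 0.45 + 0.45 + 0.45 + 0.09 = 1.935, and the weight on samples with Cloudy = 1 is 0.09, so the estimate is P(+c | +s, +w) ≈ 0.09 / 1.935 ≈ 0.047. Five samples is far too few; the exact posterior under these CPTs is ≈ 0.17.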

Likelihood Weighting

Likelihood weighting is good:
- Takes evidence into account as we generate the sample
- At right: W's value will get picked based on the evidence values of S, R
- More samples will reflect the state of the world suggested by the evidence

Likelihood weighting doesn't solve all our problems:
- Evidence influences the choice of downstream variables, but not upstream ones (C isn't more likely to get a value matching the evidence)
- We would like to consider evidence when we sample every variable

[Bayes net diagram: C → S, C → R, S → W, R → W]

Markov Chain Monte Carlo*

Idea: instead of sampling from scratch, create samples that are each like the last one.

Procedure: resample one variable at a time, conditioned on all the rest, but keep evidence fixed. E.g., for P(b|c):

+b +a +c  →  -b +a +c  →  -b -a +c

Properties: now samples are not independent (in fact they're nearly identical), but sample averages are still consistent estimators!

What's the point: both upstream and downstream variables condition on evidence.

Gibbs Sampling

1. Set all evidence E1,...,Em to e1,...,em
2. Do forward sampling to obtain x1,...,xn
3. Repeat:
   a. Pick any variable Xi uniformly at random.
   b. Resample xi' from P(Xi | MarkovBlanket(Xi))
   c. Set all other xj' = xj
   d. The new sample is x1',...,xn'

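A minimal Gibbs sketch for the same query as before, P(C | +s, +w), on the sprinkler network, with the Markov-blanket conditionals worked out by hand (all names are mine):

```python
import random

# CPTs from the likelihood-weighting slide; evidence S=+s, W=+w is fixed.
P_C = 0.5                                     # P(+c)
P_S = {True: 0.1, False: 0.5}                 # P(+s | C)
P_R = {True: 0.8, False: 0.2}                 # P(+r | C)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}   # P(+w | S, R)

def resample_c(r):
    """P(C | MB(C)): C's Markov blanket is its children S and R,
    so P(C | +s, r) ∝ P(C) · P(+s | C) · P(r | C)."""
    def score(c):
        return (P_C if c else 1 - P_C) * P_S[c] * (P_R[c] if r else 1 - P_R[c])
    t = score(True)
    return random.random() < t / (t + score(False))

def resample_r(c):
    """P(R | MB(R)): R's blanket is its parent C, child W, and W's other
    parent S, so P(R | c, +s, +w) ∝ P(R | c) · P(+w | +s, R)."""
    def score(r):
        return (P_R[c] if r else 1 - P_R[c]) * P_W[(True, r)]
    t = score(True)
    return random.random() < t / (t + score(False))

def gibbs(n=100_000, burn_in=1_000):
    c = r = True                              # arbitrary initial state
    hits = 0
    for i in range(n + burn_in):
        if random.random() < 0.5:             # pick a non-evidence variable
            c = resample_c(r)
        else:
            r = resample_r(c)
        if i >= burn_in:
            hits += c
    return hits / n

print(gibbs())                                # ≈ 0.17, matching likelihood weighting
```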

Markov Blanket


Markov blanket of X:
1. All parents of X
2. All children of X
3. All parents of children of X (except X itself)

X is conditionally independent of all other variables in the BN, given all variables in the Markov blanket (besides X).

MCMC algorithm


The Markov Chain


Summary

Sampling can be your salvation. Dominant approach to inference in BNs. Pros/cons:

- Forward Sampling
- Rejection Sampling
- Likelihood Weighted Sampling
- Gibbs Sampling/MCMC

Google’s PageRank


http://en.wikipedia.org/wiki/Markov_chain

Page, Lawrence; Brin, Sergey; Motwani, Rajeev and Winograd, Terry (1999). The PageRank citation ranking: Bringing order to the Web.

See also: J. Kleinberg. Authoritative sources in a hyperlinked environment. Proc. 9th ACM-SIAM Symposium on Discrete Algorithms, 1998.

Google’s PageRank

[Graph diagram: pages A-J with links]

Graph of the Internet (pages and links)

Google’s PageRank


Start at a random page, take a random walk. Where do we end up?

Google’s PageRank


Add 15% probability of moving to a random page. Now where do we end up?

Google’s PageRank


PageRank(P) = Probability that a long random walk ends at node P
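A minimal random-walk estimator in this spirit; the adjacency list below is made up for illustration, since the slide's A-J graph is not recoverable from the transcript:

```python
import random
from collections import Counter

# A made-up link graph for illustration.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A", "D"], "D": ["C"]}
pages = list(links)

def pagerank_by_walk(steps=1_000_000, teleport=0.15):
    """Take one long walk: with probability 0.15 jump to a uniformly
    random page, otherwise follow a random outgoing link. PageRank(P)
    is estimated by the fraction of steps spent at P."""
    visits = Counter()
    page = random.choice(pages)
    for _ in range(steps):
        if random.random() < teleport or not links[page]:
            page = random.choice(pages)         # random restart
        else:
            page = random.choice(links[page])   # follow a random link
        visits[page] += 1
    return {p: visits[p] / steps for p in pages}

print(pagerank_by_walk())
```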


Learning


Where do the CPTs come from?

How do you build a Bayes Net to begin with? Two scenarios:

- Complete data
- Incomplete data (only touched on; you are not responsible for this)

Learning from complete data

Observe data from the real world. Estimate probabilities by counting:

P̂(X = x) = (1/N) Σ_j I(x^(j), x)

i.e., the fraction of observations in which X = x; conditional CPT entries are estimated the same way, counting within each parent configuration.

(Just like rejection sampling, except we observe the real world.)


Cloudy  Rainy  Sprinkler  WetGrass
0       0      1          1
1       1      0          1
1       0      0          0
0       0      1          1
0       0      1          1
0       0      1          1
1       1      0          1
0       0      1          1
1       0      0          0
1       1      0          1
0       0      1          1
1       1      0          1
0       0      1          1
1       1      0          1
1       1      0          1
0       1      0          1
0       0      1          0

(Indicator function I(a,b)=1 if and only if a=b.)
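Counting on exactly this table, as a small sketch (the tuple encoding and function names are mine):

```python
# The 17 observations above as (Cloudy, Rainy, Sprinkler, WetGrass) tuples.
data = [
    (0, 0, 1, 1), (1, 1, 0, 1), (1, 0, 0, 0), (0, 0, 1, 1), (0, 0, 1, 1),
    (0, 0, 1, 1), (1, 1, 0, 1), (0, 0, 1, 1), (1, 0, 0, 0), (1, 1, 0, 1),
    (0, 0, 1, 1), (1, 1, 0, 1), (0, 0, 1, 1), (1, 1, 0, 1), (1, 1, 0, 1),
    (0, 1, 0, 1), (0, 0, 1, 0),
]

# P(+cloudy): fraction of rows with Cloudy = 1.
p_cloudy = sum(row[0] for row in data) / len(data)

def p_rainy_given_cloudy(c):
    """P(+rainy | Cloudy = c): count within rows matching the parent value."""
    rows = [row for row in data if row[0] == c]
    return sum(row[1] for row in rows) / len(rows)

print(f"P(+cloudy)          = {p_cloudy:.2f}")                  # 8/17 ≈ 0.47
print(f"P(+rainy | +cloudy) = {p_rainy_given_cloudy(1):.2f}")   # 6/8  = 0.75
print(f"P(+rainy | -cloudy) = {p_rainy_given_cloudy(0):.2f}")   # 1/9  ≈ 0.11
```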

Learning from incomplete data

Observe real-world data. Some variables are unobserved. Trick: fill them in.

EM algorithm:
- E = Expectation
- M = Maximization

Cloudy  Rainy  Sprinkler  WetGrass
1       ?      ?          0
0       ?      ?          1
1       ?      ?          1
1       ?      ?          0
1       ?      ?          1
1       ?      ?          1
0       ?      ?          1
1       ?      ?          0
1       ?      ?          0
0       ?      ?          1
0       ?      ?          0
0       ?      ?          1
1       ?      ?          0
1       ?      ?          1
1       ?      ?          1
1       ?      ?          0
1       ?      ?          1
0       ?      ?          1
1       ?      ?          0
1       ?      ?          0

EM-Algorithm

Initialize CPTs (e.g., randomly or uniformly). Repeat until convergence:

E-Step: compute P(Xi = x | observed values, current CPTs) for all i and x (inference)

M-Step: update the CPT tables from these expected counts

(Just like likelihood-weighted sampling, but on real-world data.)
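A deliberately simplified EM sketch. The simplifications are my assumptions, not the slide's: only Rainy is hidden, P(+w | S, R) is known and fixed, and we re-estimate just P(+r | Cloudy) from observed (cloudy, sprinkler, wet) rows:

```python
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}   # P(+w | S, R), fixed

def em_rain_cpt(data, iters=50):
    p_r = {True: 0.5, False: 0.5}                   # initial guess for P(+r | C)
    for _ in range(iters):
        # E-step: expected count of +r for each value of Cloudy, using
        # P(r | c, s, w) ∝ P(r | c) · P(w | s, r).
        exp_r = {True: 0.0, False: 0.0}
        n = {True: 0.0, False: 0.0}
        for c, s, w in data:
            like = {r: (p_r[c] if r else 1 - p_r[c])
                       * (P_W[(s, r)] if w else 1 - P_W[(s, r)])
                    for r in (True, False)}
            exp_r[c] += like[True] / (like[True] + like[False])
            n[c] += 1
        # M-step: re-estimate the CPT from the expected counts.
        p_r = {c: exp_r[c] / n[c] for c in (True, False)}
    return p_r

# Tiny made-up dataset of (cloudy, sprinkler, wet) observations.
data = [(True, False, True), (True, False, False), (False, True, True),
        (False, True, True), (True, False, True)]
print(em_rain_cpt(data))                            # learned P(+r | c)
```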

EM-Algorithm

Properties:
- No learning rate
- Monotonic convergence: each iteration improves the log-likelihood of the data
- Outcome depends on initialization
- Can be slow for large Bayes Nets ...

Decision Networks


Decision Networks

MEU: choose the action which maximizes the expected utility given the evidence.

Can directly operationalize this with decision networks: Bayes nets with nodes for utility and actions. Lets us calculate the expected utility for each action.

New node types:
- Chance nodes (just like BNs)
- Actions (rectangles, cannot have parents, act as observed evidence)
- Utility node (diamond, depends on action and chance nodes)

[Decision network diagram: Weather → Forecast; Umbrella and Weather → U]

Decision Networks

Action selection:
1. Instantiate all evidence
2. Set action node(s) each possible way
3. Calculate the posterior for all parents of the utility node, given the evidence
4. Calculate the expected utility for each action
5. Choose the maximizing action


Example: Decision Networks

[Decision network diagram: Weather and Umbrella → U]

W     P(W)
sun   0.7
rain  0.3

A      W     U(A,W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

Umbrella = leave: EU(leave) = 0.7 · 100 + 0.3 · 0 = 70
Umbrella = take: EU(take) = 0.7 · 20 + 0.3 · 70 = 35
Optimal decision = leave

Decisions as Outcome Trees

Almost exactly like expectimax / MDPs

[Outcome tree: from the root {}, branch on the action (take / leave), then on Weather (sun / rain), reaching leaves U(t,s), U(t,r), U(l,s), U(l,r)]

Evidence in Decision Networks

Find P(W | F=bad). Select for evidence:
- First we join P(W) and P(F=bad | W)
- Then we normalize

[Decision network diagram: Weather → Forecast; Umbrella and Weather → U]

W     P(W)
sun   0.7
rain  0.3

F     P(F|sun)
good  0.8
bad   0.2

F     P(F|rain)
good  0.1
bad   0.9

Join: P(W, F=bad) = P(W) · P(F=bad | W)
sun: 0.7 · 0.2 = 0.14
rain: 0.3 · 0.9 = 0.27

Normalize by 0.14 + 0.27 = 0.41:

W     P(W | F=bad)
sun   0.34
rain  0.66

Example: Decision Networks

[Decision network diagram: Weather → Forecast=bad; Umbrella and Weather → U]

A      W     U(A,W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

W     P(W|F=bad)
sun   0.34
rain  0.66

Umbrella = leave: EU(leave | F=bad) = 0.34 · 100 + 0.66 · 0 = 34
Umbrella = take: EU(take | F=bad) = 0.34 · 20 + 0.66 · 70 = 53
Optimal decision = take
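The action-selection procedure, run on the slide's tables; a minimal sketch (names are mine):

```python
# Utility table U(A, W) from the slide.
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}

def best_action(p_weather):
    """Expected utility of each action under a posterior over Weather,
    plus the maximizing action."""
    eu = {a: sum(p_weather[w] * U[(a, w)] for w in ("sun", "rain"))
          for a in ("leave", "take")}
    return eu, max(eu, key=eu.get)

print(best_action({"sun": 0.7, "rain": 0.3}))    # no evidence: leave (70 vs 35)
print(best_action({"sun": 0.34, "rain": 0.66}))  # F=bad: take (53 vs 34)
```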

Value of Information

Idea: compute the value of acquiring evidence. Can be done directly from the decision network.

Example: buying oil drilling rights
- Two blocks A and B, exactly one has oil, worth k
- You can drill in one location
- Prior probabilities 0.5 each, mutually exclusive
- Drilling in either A or B has MEU = k/2

Question: what's the value of information?
- Value of knowing which of A or B has oil
- Value is the expected gain in MEU from the new info
- Survey may say "oil in A" or "oil in B," prob 0.5 each
- If we know OilLoc, MEU is k (either way)
- Gain in MEU from knowing OilLoc? VPI(OilLoc) = k/2
- Fair price of the information: k/2

[Decision network diagram: OilLoc and DrillLoc → U]

D  O  U(D,O)
a  a  k
a  b  0
b  a  0
b  b  k

O  P(O)
a  1/2
b  1/2

Value of Information

Assume we have evidence E=e. Value if we act now:
MEU(e) = max_a Σ_s P(s | e) U(s, a)

Assume we see that E' = e'. Value if we act then:
MEU(e, e') = max_a Σ_s P(s | e, e') U(s, a)

BUT E' is a random variable whose value is unknown, so we don't know what e' will be.

Expected value if E' is revealed and then we act:
MEU(e, E') = Σ_{e'} P(e' | e) MEU(e, e')

Value of information: how much MEU goes up by revealing E' first:
VPI(E' | e) = MEU(e, E') − MEU(e)

VPI = "Value of Perfect Information"

VPI Example: Weather

[Decision network diagram: Weather → Forecast; Umbrella and Weather → U]

A      W     U(A,W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

MEU with no evidence: max(70, 35) = 70 (leave)
MEU if forecast is bad: max(34, 53) = 53 (take)
MEU if forecast is good: P(W | F=good) ≈ (sun 0.95, rain 0.05), so max(95, 22.5) = 95 (leave)

Forecast distribution:

F     P(F)
good  0.59
bad   0.41

VPI(Forecast) = 0.59 · 95 + 0.41 · 53 − 70 ≈ 7.8
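Putting the pieces together numerically; a minimal sketch using the tables above (P(W | F=good) comes from the same join-and-normalize step shown earlier):

```python
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
p_f = {"good": 0.59, "bad": 0.41}                 # forecast distribution

# Posteriors over Weather: P(W | F=bad) from the earlier slide;
# P(W | F=good) by the same join-and-normalize (0.56/0.59, 0.03/0.59).
p_w = {None: {"sun": 0.7, "rain": 0.3},
       "bad": {"sun": 0.34, "rain": 0.66},
       "good": {"sun": 0.56 / 0.59, "rain": 0.03 / 0.59}}

def meu(dist):
    """Max over actions of the expected utility under dist."""
    return max(sum(dist[w] * U[(a, w)] for w in dist)
               for a in ("leave", "take"))

meu_now = meu(p_w[None])                          # act with no evidence: 70
meu_then = sum(p_f[f] * meu(p_w[f]) for f in p_f) # expected MEU after forecast
print(meu_then - meu_now)                         # ≈ 7.7; rounding the
                                                  # posteriors gives ≈ 7.8
```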

VPI Properties

Nonnegative: for all E', e: VPI(E' | e) ≥ 0

Nonadditive (consider, e.g., obtaining Ej twice): VPI(Ej, Ek | e) ≠ VPI(Ej | e) + VPI(Ek | e)

Order-independent: VPI(Ej, Ek | e) = VPI(Ej | e) + VPI(Ek | e, ej) = VPI(Ek | e) + VPI(Ej | e, ek)

Quick VPI Questions

The soup of the day is either clam chowder or split pea, but you wouldn’t order either one. What’s the value of knowing which it is?

There are two kinds of plastic forks at a picnic. It must be that one is slightly better. What’s the value of knowing which?

You have $10 to bet double-or-nothing and there is a 75% chance that the Rams will beat the 49ers. What’s the value of knowing the outcome in advance?

You must bet on the Rams, either way. What’s the value now?

That’s all

Next Lecture

Hidden Markov Models! (You're gonna love it!!)

Reminder, next lecture is a week from today (on Wednesday).

Go to talk by Sanmay Das, Lopata 101, Friday 11-12.

