4. Markov Chains


Page 1: 4. Markov Chains

4. Markov Chains

• A discrete time process {X_n, n = 0, 1, 2, . . .} with discrete state space X_n ∈ {0, 1, 2, . . .} is a Markov chain if it has the Markov property:

P[X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, . . . , X_0 = i_0] = P[X_{n+1} = j | X_n = i]

• In words, “the past is conditionally independent of the future given the present state of the process” or “given the present state, the past contains no additional information on the future evolution of the system.”

• The Markov property is common in probability models because, by assumption, one supposes that the important variables for the system being modeled are all included in the state space.

• We consider homogeneous Markov chains, for which P[X_{n+1} = j | X_n = i] = P[X_1 = j | X_0 = i].

Page 2: 4. Markov Chains

Example: physical systems. If the state space contains the masses, velocities and accelerations of particles subject to Newton’s laws of mechanics, the system is Markovian (but not random!).

Example: speech recognition. Context can be important for identifying words. Context can be modeled as a probability distribution for the next word given the most recent k words. This can be written as a Markov chain whose state is a vector of k consecutive words.

Example: epidemics. Suppose each infected individual has some chance of contacting each susceptible individual in each time interval, before becoming removed (recovered or hospitalized). Then, the number of infected and susceptible individuals may be modeled as a Markov chain.

Page 3: 4. Markov Chains

• Define P_{ij} = P[X_{n+1} = j | X_n = i]. Let P = [P_{ij}] denote the (possibly infinite) transition matrix of the one-step transition probabilities.

• Write P^2_{ij} = Σ_{k=0}^∞ P_{ik} P_{kj}, corresponding to standard matrix multiplication. Then

P^2_{ij} = Σ_k P[X_{n+1} = k | X_n = i] P[X_{n+2} = j | X_{n+1} = k]
         = Σ_k P[X_{n+2} = j, X_{n+1} = k | X_n = i]   (via the Markov property. Why?)
         = P[ ∪_k {X_{n+2} = j, X_{n+1} = k} | X_n = i ]
         = P[X_{n+2} = j | X_n = i]

• Generalizing this calculation: the matrix power P^n_{ij} gives the n-step transition probabilities.

Page 4: 4. Markov Chains

• The matrix multiplication identity P^{n+m} = P^n P^m corresponds to the Chapman-Kolmogorov equation

P^{n+m}_{ij} = Σ_{k=0}^∞ P^n_{ik} P^m_{kj}.

• Let ν(n) be the (possibly infinite) row vector of probabilities at time n, so ν(n)_i = P[X_n = i]. Then ν(n) = ν(0) P^n, using standard multiplication of a vector and a matrix. Why?

Page 5: 4. Markov Chains

Example. Set X_0 = 0, and let X_n evolve as a Markov chain with transition matrix

P =   1/2  1/2   0
       0    0    1
       1    0    0

[State diagram: 0 → 0 with probability 1/2, 0 → 1 with probability 1/2, 1 → 2 with probability 1, 2 → 0 with probability 1.]

Find ν(n)_0 = P[X_n = 0] by

(i) using a probabilistic argument

Page 6: 4. Markov Chains

(ii) using linear algebra.
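For part (ii), a minimal numpy sketch (added illustration, not part of the original notes): compute ν(n) = ν(0) P^n by repeated vector-matrix multiplication and read off ν(n)_0 = P[X_n = 0].

import numpy as np

# Transition matrix from the example (states 0, 1, 2)
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

nu = np.array([1.0, 0.0, 0.0])    # nu(0): the chain starts in state 0

for n in range(1, 13):
    nu = nu @ P                   # nu(n) = nu(n-1) P = nu(0) P^n
    print(f"n = {n:2d}   P[X_n = 0] = {nu[0]:.4f}")

The printed values approach 1/2, the stationary probability of state 0 obtained by solving π = πP for this chain.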


Page 7: 4. Markov Chains

Classification of States

• State j is accessible from i if P^k_{ij} > 0 for some k ≥ 0.

• The transition matrix can be represented as a directed graph with arrows corresponding to positive one-step transition probabilities; j is accessible from i if there is a path from i to j. For example,

[Diagram: states 0 → 1 → 2 → 3 → 4 → · · ·, each state also having a self-loop.]

Here, 4 is accessible from 0, but not vice versa.

• i and j communicate if they are accessible from each other. This is written i ↔ j, and is an equivalence relation, meaning that

(i) i ↔ i [reflexivity]
(ii) If i ↔ j then j ↔ i [symmetry]
(iii) If i ↔ j and j ↔ k then i ↔ k [transitivity]

Page 8: 4. Markov Chains

• An equivalence relation divides a set (here, the state space) into disjoint classes of equivalent states (here, called communication classes).

• A Markov chain is irreducible if all the states communicate with each other, i.e., if there is only one communication class.

• The communication class containing i is absorbing if P_{jk} = 0 whenever i ↔ j but i ↮ k (i.e., when i communicates with j but not with k). An absorbing class can never be left. A partial converse is . . .

Example: Show that a communication class, once left, can never be re-entered.
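A small Python sketch of these definitions (an added illustration with a made-up 4-state chain): accessibility is reachability in the directed graph of positive one-step probabilities, and communication classes are the sets of mutually accessible states.

import numpy as np

# Hypothetical 4-state chain: {0,1} is one class; {2} and {3} are their own classes,
# and {3} is absorbing since it can never be left.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.3, 0.4, 0.0],
              [0.0, 0.0, 0.2, 0.8],
              [0.0, 0.0, 0.0, 1.0]])

n = P.shape[0]
reach = (np.eye(n) + (P > 0)).astype(int)      # paths of length 0 or 1
for _ in range(n):                             # repeated squaring covers all path lengths
    reach = ((reach @ reach) > 0).astype(int)

communicate = (reach * reach.T) > 0            # i <-> j: accessible in both directions
classes = {frozenset(map(int, np.flatnonzero(communicate[i]))) for i in range(n)}
print(sorted(sorted(c) for c in classes))      # [[0, 1], [2], [3]]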

Page 9: 4. Markov Chains

• State i has period d if P^n_{ii} = 0 when n is not a multiple of d and if d is the greatest integer with this property. If d = 1 then i is aperiodic.

Example: Show that all states in the same communication class have the same period.

Note: This is “obvious” by considering a generic directed graph for a periodic state:

[Diagram: a cycle of states i → i+1 → i+2 → · · · → i+d−1 → i.]

Page 10: 4. Markov Chains

• State i is recurrent if P[re-enter i | X_0 = i] = 1, where {re-enter i} is the event ∪_{n=1}^∞ {X_n = i, X_k ≠ i for k = 1, . . . , n − 1}. If i is not recurrent, it is transient.

• Let S_0 = 0 and S_1, S_2, . . . be the times of successive returns to i, with N_i(t) = max{n : S_n ≤ t} being the corresponding counting process.

• If i is recurrent, then N_i(t) is a renewal process, since the Markov property gives independence of the interarrival times X^A_n = S_n − S_{n−1}. Letting μ_{ii} = E[X^A_1], the expected return time for i, we then have the following from renewal theory:

⋄ P[ lim_{t→∞} N_i(t)/t = 1/μ_{ii} | X_0 = i ] = 1

⋄ If i ↔ j, lim_{n→∞} Σ_{k=1}^n P^k_{ij} / n = 1/μ_{jj}

⋄ If i is aperiodic, lim_{n→∞} P^n_{ij} = 1/μ_{jj} for j ↔ i

⋄ If i has period d, lim_{n→∞} P^{nd}_{ii} = d/μ_{ii}

Page 11: 4. Markov Chains

• If i is transient, then N_i(t) is a defective renewal process. This is a generalization of renewal processes where X^A_1, X^A_2, . . . are still iid, but we allow P[X^A_1 = ∞] > 0.

Proposition: i is recurrent if and only if Σ_{n=1}^∞ P^n_{ii} = ∞.

Page 12: 4. Markov Chains

Example: Show that if i is recurrent and i ↔ j, then j is recurrent.

Example: If i is recurrent and i ↔ j, show that P[never enter state j | X_0 = i] = 0.

Page 13: 4. Markov Chains

• If i is transient, then μ_{ii} = ∞. If i is recurrent and μ_{ii} < ∞, then i is said to be positive recurrent. Otherwise, if μ_{ii} = ∞, i is null recurrent.

Proposition: If i ↔ j and i is recurrent, then either i and j are both positive recurrent, or both null recurrent (i.e., positive/null recurrence is a property of communication classes).

Page 14: 4. Markov Chains

Random Walks

• The simple random walk is a Markov chain on the integers, Z = {. . . , −1, 0, 1, 2, . . .}, with X_0 = 0 and

P[X_{n+1} = X_n + 1] = p,   P[X_{n+1} = X_n − 1] = 1 − p.

Example: If X_n counts the number of successes minus the number of failures for a new medical procedure, X_n could be modeled as a random walk, with p the success rate of the procedure. When should the trial be stopped?

• If p = 1/2, the random walk is symmetric.

• The symmetric random walk in d dimensions is a vector-valued Markov chain, with state space Z^d and X^{(d)}_0 = (0, . . . , 0). Two possible definitions are

(i) Let X^{(d)}_{n+1} − X^{(d)}_n take each of the 2^d possibilities (±1, ±1, . . . , ±1) with equal probability.

(ii) Let X^{(d)}_{n+1} − X^{(d)}_n take one of the 2d values (±1, 0, . . . , 0), (0, ±1, 0, . . . , 0), . . . with equal probability. This is harder to analyze.

Page 15: 4. Markov Chains

Example: A biological signaling molecule becomes separated from its receptor. It starts diffusing, due to thermal noise. Suppose the diffusion is well modeled by a random walk. Will the molecule return to the receptor? If the molecule is constrained to a one-dimensional line? A two-dimensional surface? Three-dimensional space?

Proposition: The symmetric random walk is null recurrent when d = 1 and d = 2, but transient for d ≥ 3.

Proof: The method is to employ Stirling’s formula, n! ∼ n^{n+1/2} e^{−n} √(2π), where a_n ∼ b_n means lim_{n→∞}(a_n/b_n) = 1, to approximate P[X^{(d)}_n = X^{(d)}_0].

Note: This is in HW 5. You are expected to solve the simpler case (i), though you can solve (ii) if you want a bigger challenge.
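A numerical illustration (added sketch, for definition (i), under which the d coordinates are independent one-dimensional symmetric walks, so P[X^{(d)}_{2n} = 0] = (C(2n, n) 2^{−2n})^d): compare the exact return probabilities with the Stirling approximation and look at partial sums of Σ_n P^n_{ii}.

from math import comb, pi

def p_return_exact(n, d):
    # P[X_{2n} = 0] under definition (i): each coordinate is an independent
    # 1-d symmetric walk, so the probability is (C(2n, n) / 2^{2n})^d
    return (comb(2 * n, n) / 4 ** n) ** d

def p_return_stirling(n, d):
    # Stirling's formula gives C(2n, n) / 2^{2n} ~ 1 / sqrt(pi * n)
    return (pi * n) ** (-d / 2)

for d in (1, 2, 3):
    exact = sum(p_return_exact(n, d) for n in range(1, 201))
    approx = sum(p_return_stirling(n, d) for n in range(1, 201))
    print(f"d = {d}: partial sum to n = 200 is {exact:.3f} (Stirling: {approx:.3f})")

Since the terms behave like (πn)^{−d/2}, the series diverges for d = 1, 2 and converges for d ≥ 3, which is the recurrence/transience dichotomy in the proposition.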

Page 16: 4. Markov Chains

Stationary Distributions

• π = {π_i, i = 0, 1, . . .} is a stationary distribution for P = [P_{ij}] if π_j = Σ_{i=0}^∞ π_i P_{ij}, with π_i ≥ 0 and Σ_{i=0}^∞ π_i = 1.

• In matrix notation, π_j = Σ_{i=0}^∞ π_i P_{ij} is π = πP, where π is a row vector.

Theorem: An irreducible, aperiodic, positive recurrent Markov chain has a unique stationary distribution, which is also the limiting distribution

π_j = lim_{n→∞} P^n_{ij}.

• Such Markov chains are called ergodic.

Proof

Page 17: 4. Markov Chains

Proof continued

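As a numerical companion to the theorem (an added sketch, reusing the three-state transition matrix from the earlier example): solve the stationarity equations π = πP together with Σ_i π_i = 1 as a linear system, and check against the limiting distribution.

import numpy as np

P = np.array([[0.5, 0.5, 0.0],     # three-state chain from the earlier example
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
n = P.shape[0]

# Stationarity pi P = pi is (P^T - I) pi^T = 0; append the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)                                    # ~ [0.5, 0.25, 0.25]

# The chain is irreducible and aperiodic, so pi is also the limiting distribution:
print(np.linalg.matrix_power(P, 50)[0])      # a row of P^50 is ~ pi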

Page 18: 4. Markov Chains

• Irreducible chains which are transient or null recurrent have no stationary distribution. Why?

• Chains which are periodic or which have multiple communicating classes may have lim_{n→∞} P^n_{ij} not existing, or depending on i.

• A chain started in a stationary distribution will remain in that distribution, i.e., will result in a stationary process.

• If we can find any probability distribution solving the stationarity equations π = πP, and we check that the chain is irreducible and aperiodic, then we know that

(i) The chain is positive recurrent.
(ii) π is the unique stationary distribution.
(iii) π is the limiting distribution.

Page 19: 4. Markov Chains

Example: Markov chain Monte Carlo

• Suppose we wish to evaluate E[h(X)] where X has distribution π (i.e., P[X = i] = π_i). The Monte Carlo approach is to generate X_1, X_2, . . . , X_n ∼ π and estimate

E[h(X)] ≈ (1/n) Σ_{i=1}^n h(X_i)

• If it is hard to generate an iid sample from π, we may look to generate a sequence from a Markov chain with limiting distribution π.

• This idea, called Markov chain Monte Carlo (MCMC), was introduced by Metropolis et al. (1953) and generalized by Hastings (1970). It has become a fundamental computational method for the physical and biological sciences. It is also commonly used for Bayesian statistical inference.

Page 20: 4. Markov Chains

Metropolis-Hastings Algorithm

(i) Choose a transition matrix Q = [q_{ij}]

(ii) Set X_0 = 0

(iii) For n = 1, 2, . . .

⋄ Generate Y_n with P[Y_n = j | X_{n−1} = i] = q_{ij}.

⋄ If X_{n−1} = i and Y_n = j, set

X_n = { j   with probability min(1, π_j q_{ji} / π_i q_{ij})
      { i   otherwise

• Here, Y_n is called the proposal and we say the proposal is accepted with probability min(1, π_j q_{ji} / π_i q_{ij}). If the proposal is not accepted, the chain stays in its previous state.
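A compact Python sketch of the algorithm (an added illustration; the target π and proposal matrix Q below are arbitrary choices, not from the notes):

import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.1, 0.2, 0.3, 0.4])      # target distribution (example choice)
m = len(pi)
Q = np.full((m, m), 1.0 / m)             # proposal: pick any state uniformly at random

x, counts = 0, np.zeros(m)
for _ in range(100_000):
    y = rng.choice(m, p=Q[x])                                  # propose Y_n ~ q_{x,.}
    if rng.random() < min(1.0, pi[y] * Q[y, x] / (pi[x] * Q[x, y])):
        x = y                                                  # accept; otherwise stay put
    counts[x] += 1

print(counts / counts.sum())             # long-run state frequencies, ~ pi

By the proposition on the following slide, π is stationary for this chain, so the long-run state frequencies should be close to π.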

Page 21: 4. Markov Chains

Proposition: Set

P_{ij} = { q_{ij} min(1, π_j q_{ji} / π_i q_{ij})                            j ≠ i
         { q_{ii} + Σ_{k≠i} q_{ik} {1 − min(1, π_k q_{ki} / π_i q_{ik})}     j = i

Then π is a stationary distribution of the Metropolis-Hastings chain {X_n}. If P_{ij} is irreducible and aperiodic, then π is also the limiting distribution.

Proof.

Page 22: 4. Markov Chains

Example: (spatial models on a lattice)

Let {Y_{ij}, 1 ≤ i ≤ m, 1 ≤ j ≤ n} be a spatial stochastic process. Let N(i, j) define a neighborhood, e.g. N(i, j) = {(p, q) : |p − i| + |q − j| = 1}.

[Diagram: an m × n lattice of sites.]

• We may model the conditional distribution of Y_{ij} given the neighbors. E.g., if your neighbors vote Republican (R), you are more likely to do so. Say,

P[Y_{ij} = R | {Y_{pq}, (p, q) ≠ (i, j)}] = 1/2 + α [ ( Σ_{(p,q) ∈ N(i,j)} I{Y_{pq} = R} ) − 2 ].

• The full distribution π of Y is, in principle, specified by the conditional distributions. In practice, one simulates from π using MCMC, where the proposal distribution is to pick a random (i, j) and swap it from R to D or vice versa.

Page 23: 4. Markov Chains

Example: Sampling conditional distributions.

• If X and Θ have joint density f_{XΘ}(x, θ) and we observe X = x, then one wants to sample from

f_{Θ|X}(θ | x) = f_{XΘ}(x, θ) / f_X(x) ∝ f_Θ(θ) f_{X|Θ}(x | θ)

• For Bayesian inference, f_Θ(θ) is the prior distribution, and f_{Θ|X}(θ | x) is the posterior. The model determines f_Θ(θ) and f_{X|Θ}(x | θ). The normalizing constant,

f_X(x) = ∫ f_{XΘ}(x, θ) dθ,

is often unknown.

• Using MCMC to sample from the posterior allows numerical evaluation of the posterior mean E[Θ | X = x], the posterior variance Var(Θ | X = x), or a 1 − α credible region defined to be a set A such that P[Θ ∈ A | X = x] = 1 − α.

Page 24: 4. Markov Chains

Galton-Watson Branching Process

• Let X_n be the size of a population at time n. Suppose each individual has an iid offspring distribution, so X_{n+1} = Σ_{k=1}^{X_n} Z^{(k)}_{n+1}, where Z^{(k)}_{n+1} ∼ Z.

• We can think of Z^{(k)}_{n+1} = 0 being the death of the parent, or think of X_n as the size of the nth generation.

• Suppose X_0 = 1. Notice that X_n = 0 is an absorbing state, termed extinction. Supposing P[Z = 0] > 0 and P[Z > 1] > 0, we can show that 1, 2, 3, . . . are transient states. Why?

• Thus, the branching process either becomes extinct or tends to infinity.

Page 25: 4. Markov Chains

• Set φ(s) = E[s^Z], the probability generating function for Z. Set φ_n(s) = E[s^{X_n}]. Show that φ_n(s) = φ^{(n)}(s) = φ(φ(. . . φ(s) . . .)), meaning φ applied n times.

Page 26: 4. Markov Chains

• If we can find a solution to φ(s) = s, we will have φ_n(s) = s for all n. This suggests plotting φ(s) vs s, noting that

(i) φ(1) = 1. Why?
(ii) φ(0) = P[Z = 0]. Why?
(iii) φ(s) is increasing, i.e., dφ/ds > 0. Why?
(iv) φ(s) is convex, i.e., d²φ/ds² > 0. Why?

Page 27: 4. Markov Chains

• Now, notice that, for 0 < s < 1,

P[extinction] = lim_{n→∞} P[X_n = 0] = lim_{n→∞} φ_n(s).

• Argue that, in CASE 1, lim_{n→∞} φ_n(s) is the unique fixed point of φ(s) = s with 0 < s < 1. In CASE 2, lim_{n→∞} φ_n(s) = 1.

Page 28: 4. Markov Chains

• Conclude that in CASE 1 (with dφ/ds |_{s=1} > 1, i.e., E[Z] > 1) there is some possibility of infinite growth. In CASE 2 (with dφ/ds |_{s=1} ≤ 1, i.e., E[Z] ≤ 1) extinction is assured.

• Example: take

Z = { 0 w.p. 1/4
    { 1 w.p. 1/4
    { 2 w.p. 1/2

Find the chance of extinction.

• Now suppose the founding population has size k (i.e., X_0 = k). Find the chance of extinction.
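A quick numerical check (added sketch) for this offspring distribution: iterate φ_n(0) = φ(φ(· · · φ(0) · · ·)), which is P[X_n = 0] and increases to the extinction probability; with a founding population of size k the k lines must die out independently.

# Offspring pgf for Z = 0 w.p. 1/4, 1 w.p. 1/4, 2 w.p. 1/2
def phi(s):
    return 0.25 + 0.25 * s + 0.5 * s ** 2

s = 0.0                          # phi_n(0) = P[X_n = 0] increases to P[extinction]
for _ in range(200):
    s = phi(s)

print("P[extinction | X_0 = 1] ~", round(s, 4))    # the smaller root of phi(s) = s, here 1/2
k = 3                                              # hypothetical founding population size
print("P[extinction | X_0 = 3] ~", round(s ** k, 4))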

Page 29: 4. Markov Chains

Time Reversal

• Thinking backwards in time can be a powerful strategy. For any Markov chain {X_k, k = 0, 1, . . . , n}, we can set Y_k = X_{n−k}. Then Y_k is a Markov chain. Suppose X_k has transition matrix P_{ij}; then Y_k has inhomogeneous transition matrix P*[k]_{ij}, where

P*[k]_{ij} = P[Y_{k+1} = j | Y_k = i]
           = P[X_{n−k−1} = j | X_{n−k} = i]
           = P[X_{n−k} = i | X_{n−k−1} = j] P[X_{n−k−1} = j] / P[X_{n−k} = i]
           = P_{ji} P[X_{n−k−1} = j] / P[X_{n−k} = i].

• If {X_k} is stationary, with stationary distribution π, then {Y_k} is homogeneous, with

P*_{ij} = P_{ji} π_j / π_i

• Surprisingly many important chains have the time-reversibility property P*_{ij} = P_{ij}, for which the chain looks the same forwards as backwards.

Page 30: 4. Markov Chains

Theorem (detailed balance equations): If π is a probability distribution with π_i P_{ij} = π_j P_{ji}, then π is a stationary distribution for P and the chain started at π is time-reversible.

Proof

• The detailed balance equations are simpler than the general equations for finding π. Try to solve them when looking for π, in case you get lucky!

• Intuition: π_i P_{ij} is the “rate of going directly from i to j.” A chain is reversible if this equals the “rate of going directly from j to i.”

• Periodic chains (π does not give limiting probabilities) and reducible chains (π is not unique) are both OK here.

Page 31: 4. Markov Chains

• An equivalent condition for reversibility of ergodic Markov chains is “every loop of states starting and ending in i has the same probability as the reverse loop,” i.e., for all i_1, . . . , i_k,

P_{i i_1} P_{i_1 i_2} · · · P_{i_{k−1} i_k} P_{i_k i} = P_{i i_k} P_{i_k i_{k−1}} · · · P_{i_2 i_1} P_{i_1 i}

Proof

Page 32: 4. Markov Chains

Example: Birth-Death Process.

Let P[X_{n+1} = X_n + 1 | X_n = i] = p_i and P[X_{n+1} = X_n − 1 | X_n = i] = q_i = 1 − p_i.

[Diagram: states 0, 1, 2, 3, 4, . . . with up-transitions p_0 = 1, p_1, p_2, p_3, . . . and down-transitions q_1, q_2, q_3, q_4, . . .]

• If {X_n} is positive recurrent, it must be reversible. Heuristic reason:

• Find the stationary distribution, giving conditions for its existence.

• Note that this chain is periodic (d = 2). There is still a unique stationary distribution, which solves the detailed balance equations.
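A sketch of using the detailed balance equations here (added illustration; the birth probabilities below are an arbitrary choice, and the state space is truncated for the computation): π_i p_i = π_{i+1} q_{i+1} gives π_n = π_0 Π_{k=0}^{n−1} p_k / q_{k+1}, which can be normalized whenever these products have a finite sum.

import numpy as np

N = 50                                  # truncate the state space at N for illustration
p = np.full(N + 1, 0.4); p[0] = 1.0     # birth probabilities p_i (example values)
q = 1.0 - p                             # death probabilities q_i = 1 - p_i (used for i >= 1)

pi = np.ones(N + 1)
for i in range(N):
    pi[i + 1] = pi[i] * p[i] / q[i + 1]   # detailed balance: pi_i p_i = pi_{i+1} q_{i+1}
pi /= pi.sum()                            # normalize

print(pi[:5])                             # geometric-looking decay since p_i < q_{i+1}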

Page 33: 4. Markov Chains

Example: Metropolis Algorithm. The Metropolis-Hastings algorithm with symmetric proposals (q_{ij} = q_{ji}) is the Metropolis algorithm. Here,

P_{ij} = { q_{ij} min(1, π_j / π_i)                             j ≠ i
         { q_{ii} + Σ_{k≠i} q_{ik} {1 − min(1, π_k / π_i)}      j = i

• Show that P_{ij} is reversible, with stationary distribution π.

Solution

Page 34: 4. Markov Chains

Random Walk on a Graph

• Consider an undirected, connected graph with a positive weight w_{ij} on each edge ij. Set w_{ij} = 0 if i and j are not connected by an edge.

• Set P_{ij} = w_{ij} / Σ_k w_{ik}.

• The Markov chain {X_n} with transition matrix P is called a random walk on a graph. Think of a driver who is lost: each vertex is an intersection, the weight is the width (in lanes) of a road leaving an intersection, and the driver picks a random exit when he/she arrives at an intersection but has a preference for bigger streets.

[Diagram: a weighted graph on vertices 0 through 5, with edge weights 1 and 2.]

Page 35: 4. Markov Chains

• Show that the random walk on a graph is reversible, and has stationary distribution

π_i = Σ_j w_{ij} / Σ_{jk} w_{jk}

Note: the double sum counts each weight twice.

• Hence, find the limiting probability of being at vertex 3 on the graph shown above.
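A small check (added sketch; the symmetric weight matrix below is made up, since the edge weights in the figure are not fully recoverable here): build P_{ij} = w_{ij} / Σ_k w_{ik} and verify that π_i = Σ_j w_{ij} / Σ_{jk} w_{jk} satisfies both the detailed balance equations and π = πP.

import numpy as np

# Hypothetical symmetric weight matrix on 4 vertices (w_ij = w_ji, 0 means no edge)
W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 2],
              [0, 1, 2, 0]], dtype=float)

P = W / W.sum(axis=1, keepdims=True)         # P_ij = w_ij / sum_k w_ik
pi = W.sum(axis=1) / W.sum()                 # pi_i = sum_j w_ij / sum_jk w_jk
print(pi)
print(np.allclose(pi @ P, pi))               # stationarity: pi = pi P
print(np.allclose(pi[:, None] * P, P.T * pi[None, :]))   # detailed balance: pi_i P_ij = pi_j P_ji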

Page 36: 4. Markov Chains

Ergodicity

• A stochastic process {X_n} is ergodic if limiting time averages equal limiting probabilities, i.e.,

lim_{n→∞} (time in state i up to time n) / n = lim_{n→∞} P[X_n = i].

• Show that an irreducible, aperiodic, positive recurrent Markov chain is ergodic (i.e., this general idea of ergodicity matches our definition for Markov chains).
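An empirical illustration (added sketch, using the three-state chain from the earlier example): simulate a long trajectory and compare the fraction of time spent in each state with the limiting probabilities.

import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

x, visits = 0, np.zeros(3)
for _ in range(200_000):
    visits[x] += 1
    x = rng.choice(3, p=P[x])                 # one step of the chain

print(visits / visits.sum())                  # time-average occupation, ~ [0.5, 0.25, 0.25]
print(np.linalg.matrix_power(P, 50)[0])       # limiting probabilities lim_n P^n_{0j}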

Page 37: 4. Markov Chains

Semi-Markov Processes

• A semi-Markov process is a discrete state, continuous time process {Z(t), t ≥ 0} for which the sequence of states visited is a Markov chain {X_n, n = 0, 1, . . .} and, conditional on {X_n}, the times between transitions are independent. Specifically, define transition times S_0, S_1, . . . by S_0 = 0 and S_n − S_{n−1} ∼ F_{ij} conditional on X_{n−1} = i and X_n = j. Then, Z(t) = X_{N(t)} where N(t) = sup{n : S_n ≤ t}.

• {X_n} is the embedded Markov chain.

• The special case where F_{ij} ∼ Exponential(ν_i) is called a continuous time Markov chain.

• The special case where

F_{ij}(t) = { 0,  t < 1
            { 1,  t ≥ 1

results in Z(n) = X_n, so we retrieve the discrete time Markov chain.

Page 38: 4. Markov Chains

Example: A taxi serves the airport, the hotel district and the theater district. Let Z(t) = 0, 1, 2 if the taxi’s current destination is each of these locations respectively. The time to travel from i to j has distribution F_{ij}. Upon arrival at i, the taxi immediately picks up a new customer whose destination has distribution given by P_{ij}. Then, Z(t) is a semi-Markov process.

• To study the limiting behavior of Z(t) we make some definitions. . .

• Say {Z(t)} is irreducible if {X_n} is irreducible.

• Say {Z(t)} is non-lattice if {N(t)} is non-lattice.

• Set H_i to be the c.d.f. of the time in state i before a transition, so H_i(t) = Σ_j P_{ij} F_{ij}(t), and the expected time of a visit to i is

μ_i = ∫_0^∞ x dH_i(x) = Σ_j P_{ij} ∫_0^∞ x dF_{ij}(x).

• Let T_{ii} be the time between successive visits to i and define μ_{ii} = E[T_{ii}].

Page 39: 4. Markov Chains

Proposition: If {Z(t)} is an irreducible, non-lattice semi-Markov process with μ_{ii} < ∞, then

lim_{t→∞} P[Z(t) = i | Z(0) = j] = μ_i / μ_{ii} = lim_{t→∞} (time in i before t) / t   w.p. 1.

Proof: identify a relevant alternating renewal process.

Page 40: 4. Markov Chains

• By counting up the time spent in i differently, we get another identity:

lim_{t→∞} (time in i during [0, t]) / t = π_i μ_i / Σ_j π_j μ_j,

supposing that {X_n} is ergodic (irreducible, aperiodic, positive recurrent) with stationary distribution π.

Proof
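A simulation sketch of this identity (added illustration; the embedded chain and the holding-time means below are arbitrary choices): compare the simulated long-run time fractions with π_i μ_i / Σ_j π_j μ_j.

import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.0, 0.7, 0.3],              # embedded Markov chain (example choice)
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
mu = np.array([1.0, 2.0, 5.0])              # mu_i: expected holding time per visit to i

x, time_in = 0, np.zeros(3)
for _ in range(100_000):
    time_in[x] += rng.uniform(0, 2 * mu[x])  # a non-lattice holding time with mean mu_x
    x = rng.choice(3, p=P[x])                # next state of the embedded chain

pi = np.linalg.matrix_power(P, 200)[0]       # stationary distribution of the embedded chain
print(time_in / time_in.sum())               # simulated long-run time fractions
print(pi * mu / (pi * mu).sum())             # pi_i mu_i / sum_j pi_j mu_j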

Page 41: 4. Markov Chains

• Calculations with semi-Markov models typically involve identifying a suitable renewal, alternating renewal, regenerative or reward process.

Example: Let Y(t) = S_{N(t)+1} − t be the residual life process for an ergodic, non-lattice semi-Markov model. Find an expression for lim_{t→∞} P[Y(t) > x].

Hint: Consider an alternating renewal process which switches “on” when Z(t) enters i, and switches “off” immediately if the next transition is not j, or switches “off” once the time until the transition to j is less than x.

Page 42: 4. Markov Chains

Proof (continued)
