Markov Chains

Page 1: Markov Chains - UTKweb.eecs.utk.edu/~roberts/ECE504/PresentationSlides/Markov... · Discrete-Time Markov Chains A discrete-time Markov chain is a discrete-time , discrete-value random

Markov Chains

Page 2

Discrete-Time Markov Chains

A discrete-time Markov chain is a discrete-time, discrete-value random sequence such that the next random variable X[n+1] depends on X[n] only through the transition probability

P_ij = P(X[n+1] = j | X[n] = i) = P(X[n+1] = j | X[n] = i, X[n−1] = i_{n−1}, …, X[0] = i_0)

X[n] is called the state of the system which produces the Markov chain, and the sample space of X[n] is called the state space. The transition probabilities of a Markov chain satisfy

P_ij ≥ 0 ,  Σ_j P_ij = 1

That is, the sum of the probabilities of going from state i to any of the states in the state space is one.
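As a minimal sketch (not from the slides), this definition can be exercised by simulating a chain one step at a time and estimating a transition probability from samples; the two-state matrix used here is purely illustrative:

```python
import random

# Illustrative two-state transition matrix: P[i][j] = P(X[n+1] = j | X[n] = i)
P = [[0.6, 0.4],
     [0.7, 0.3]]

def step(i):
    """Draw the next state given the current state i."""
    return 0 if random.random() < P[i][0] else 1

def estimate_Pij(i, j, trials=100_000):
    """Estimate the one-step transition probability P_ij by simulation."""
    hits = sum(1 for _ in range(trials) if step(i) == j)
    return hits / trials

random.seed(1)
print(estimate_Pij(0, 1))   # close to P[0][1] = 0.4
```

Because the next state is drawn using only the current state, the simulated sequence satisfies the Markov property by construction.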

Page 3

State Diagrams

[Figure: state diagram with two nodes, State 0 and State 1, connected by directed arcs; each arc is labeled with its transition probability.]

Page 4

State-Transition Matrix

A Markov chain can have a finite number of states or a countable infinity of states. In a system with a state space {0, 1, 2, …, N} there are (N+1)² transition probabilities and they can be represented by a state-transition matrix of the form

P = [ P_00  P_01  ⋯  P_0N
      P_10  P_11  ⋯  P_1N
       ⋮     ⋮    ⋱    ⋮
      P_N0  P_N1  ⋯  P_NN ]

The elements in each row must sum to one.

Page 5

Two-State Example

Find the probability that if the initial state is 0, the state after two steps is 1. There are two ways this can happen: 0 → 0 → 1 and 0 → 1 → 1.

P(0 → 0 → 1) = P_00 P_01 = 0.6 × 0.4 = 0.24
P(0 → 1 → 1) = P_01 P_11 = 0.4 × 0.3 = 0.12

Page 6

Two-State Example

The two transition sequences are mutually exclusive, so the probability of transitioning from 0 to 1 in two steps is the sum of their probabilities.

P(0 to 1 in two steps) = P_00 P_01 + P_01 P_11 = 0.24 + 0.12 = 0.36

The state-transition matrix for this Markov chain is

P = [ 0.6  0.4
      0.7  0.3 ]

Notice that if we square this matrix we get

P² = [ 0.64  0.36
       0.63  0.37 ]
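These numbers are easy to check numerically; a small sketch (assuming NumPy is available) reproduces both the path-by-path sum and the matrix square:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])   # two-state transition matrix from the example

# Path-by-path: 0 -> 0 -> 1 and 0 -> 1 -> 1
p_two_step = P[0, 0] * P[0, 1] + P[0, 1] * P[1, 1]
print(p_two_step)            # 0.36 (up to float round-off)

# Squaring the matrix gives every two-step probability at once
P2 = P @ P
print(P2)                    # [[0.64 0.36], [0.63 0.37]] up to round-off
```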

Page 7

Two-State Example

We can see why by looking at the details of the matrix-squaring process.

P² = [ P_00  P_01   [ P_00  P_01    =  [ P_00 P_00 + P_01 P_10   P_00 P_01 + P_01 P_11
       P_10  P_11 ]   P_10  P_11 ]       P_10 P_00 + P_11 P_10   P_10 P_01 + P_11 P_11 ]

The four elements of P² are P(0 → 0 in 2 steps), P(0 → 1 in 2 steps), P(1 → 0 in 2 steps) and P(1 → 1 in 2 steps).

Page 8

Discrete-Time Markov Chain Dynamics

In a Markov chain with a state space {0, 1, 2, …, N}, the state-transition matrix for n steps is P[n] = Pⁿ. It then follows that

P[n + m] = P[n] P[m]

Page 9

Discrete-Time Markov Chain Dynamics

Example: Given this Markov chain, find the state-transition matrix for 3 steps.

P = [ 0.2  0.8  0
      0.2  0.3  0.5
      0.4  0.6  0 ]

P[3] = P³ = [ 0.28  0.52   0.2
              0.23  0.495  0.275
              0.26  0.49   0.25 ]

Page 10

Discrete-Time Markov Chain Dynamics

Raising a matrix to a power can be done most efficiently using eigenvalues and eigenvectors. A square matrix A can be diagonalized as A = S Λ S⁻¹, where S is a matrix with the eigenvectors of A as its columns and Λ is a diagonal matrix whose non-zero elements are the eigenvalues of A. Then the nth power of A is Aⁿ = S Λⁿ S⁻¹. Raising Λ to the nth power is much simpler than raising A to the nth power because Λ is diagonal: its nth power is the diagonal matrix in which each eigenvalue is individually raised to the nth power.
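As an illustrative sketch, the diagonalization route can be compared against repeated multiplication (eigen-pairs computed with NumPy; the result can carry a negligible imaginary part when the eigenvalues are complex):

```python
import numpy as np

def matrix_power_eig(A, n):
    """A**n computed as S @ diag(eigenvalues**n) @ inv(S)."""
    lam, S = np.linalg.eig(A)            # A = S diag(lam) inv(S)
    return S @ np.diag(lam ** n) @ np.linalg.inv(S)

P = np.array([[0.2, 0.8, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.6, 0.0]])

P3 = matrix_power_eig(P, 3).real         # drop round-off imaginary part
print(np.round(P3, 3))                   # matches P @ P @ P
```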

Page 11

Discrete-Time Markov Chain Dynamics

Consider the two-state Markov chain with probability p for the 0 → 1 transition and probability q for the 1 → 0 transition. The state-transition matrix is

P = [ 1−p   p
       q   1−q ]

The eigenvalues are found by solving

det(P − λI) = 0 ⇒ (1 − p − λ)(1 − q − λ) − pq = 0

The two solutions are λ_{1,2} = 1, 1 − (p + q).

Page 12

Discrete-Time Markov Chain Dynamics

The eigenvectors are found by solving P x_i = λ_i x_i ⇒ (P − λ_i I) x_i = 0.

( [ 1−p   p          [ 1  0   ) [ x_{i1}     [ 0
  [  q   1−q ] − λ_i   0  1 ] )   x_{i2} ] =   0 ]

From the top equation, (1 − p − λ_i) x_{i1} + p x_{i2} = 0 ⇒ x_{i2} = ((λ_i + p − 1)/p) x_{i1}.

Page 13

Discrete-Time Markov Chain Dynamics

For the two eigenvalues λ_{1,2} = 1, 1 − (p + q) we get the two eigenvectors

x_1 = [ 1    and  x_2 = [   1
        1 ]               −q/p ]

Then

Pⁿ = S Λⁿ S⁻¹ = [ 1    1      [ 1ⁿ   0      [ 1    1    ⁻¹
                  1  −q/p ]     0  λ_2ⁿ ]     1  −q/p ]

Page 14

Discrete-Time Markov Chain Dynamics

Multiplying out the matrices,

Pⁿ = 1/(p+q) [ q λ_1ⁿ + p λ_2ⁿ   p λ_1ⁿ − p λ_2ⁿ
               q λ_1ⁿ − q λ_2ⁿ   p λ_1ⁿ + q λ_2ⁿ ]

Since one eigenvalue is one (which will always be true for a state-transition matrix),

Pⁿ = 1/(p+q) [ q + p λ_2ⁿ   p − p λ_2ⁿ
               q − q λ_2ⁿ   p + q λ_2ⁿ ]
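The closed form can be spot-checked against direct matrix powers for arbitrary p and q (a quick numerical sketch, not part of the slides):

```python
import numpy as np

def two_state_power(p, q, n):
    """Closed-form P**n for the two-state chain, with lam2 = 1 - (p + q)."""
    lam2 = 1.0 - (p + q)
    return np.array([[q + p * lam2**n, p - p * lam2**n],
                     [q - q * lam2**n, p + q * lam2**n]]) / (p + q)

p, q = 0.4, 0.7
P = np.array([[1 - p, p],
              [q, 1 - q]])
for n in (1, 2, 5, 20):
    assert np.allclose(two_state_power(p, q, n), np.linalg.matrix_power(P, n))
print("closed form agrees with direct powers")
```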

Page 15

Discrete-Time Markov Chain Dynamics

Pⁿ = 1/(p+q) [ q + p λ_2ⁿ   p − p λ_2ⁿ
               q − q λ_2ⁿ   p + q λ_2ⁿ ]

If the second eigenvalue has a magnitude less than one, then as n approaches infinity the elements of Pⁿ approach finite limits. The second eigenvalue is λ_2 = 1 − (p + q). If 0 < p < 1 or 0 < q < 1, this second eigenvalue is less than one in magnitude. If p = q = 1, the second eigenvalue is λ_2 = −1 and

Pⁿ = 1/2 [ 1 + (−1)ⁿ   1 − (−1)ⁿ
           1 − (−1)ⁿ   1 + (−1)ⁿ ]

The elements of Pⁿ oscillate because (−1)ⁿ alternates between +1 and −1.

Page 16

Discrete-Time Markov Chain Dynamics

Pⁿ = 1/(p+q) [ q + p λ_2ⁿ   p − p λ_2ⁿ
               q − q λ_2ⁿ   p + q λ_2ⁿ ]

If p = q = 0, the second eigenvalue is λ_2 = +1 and

Pⁿ = [ 1  0
       0  1 ]

and neither state ever transitions to the other state.

Page 17

Discrete-Time Markov Chain Dynamics

A state probability vector p[n] is a column vector of probabilities that the Markov chain is in each allowable state at time n. Given a starting probability vector p[0] it is possible to compute p[n] using

pᵀ[n] = pᵀ[0] Pⁿ

More generally,

pᵀ[n + m] = pᵀ[n] Pᵐ

Page 18

Discrete-Time Markov Chain Dynamics

For this Markov chain, find the state-probability vector for times n = 1, 2, 10, 100, 1000 given that the initial probability vector is pᵀ[0] = [0.3 0.5 0.2].

pᵀ[1] = pᵀ[0] P = [0.24 0.51 0.25]
pᵀ[2] = pᵀ[0] P² = [0.25 0.495 0.255]
pᵀ[10] = pᵀ[0] P¹⁰ = [0.25 0.5 0.25]
pᵀ[100] = pᵀ[0] P¹⁰⁰ = [0.25 0.5 0.25]
pᵀ[1000] = pᵀ[0] P¹⁰⁰⁰ = [0.25 0.5 0.25]
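These values can be reproduced by iterating pᵀ[n] = pᵀ[n−1] P (a short sketch; the rapid convergence reflects the sub-unit magnitude of the other eigenvalues):

```python
import numpy as np

P = np.array([[0.2, 0.8, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.6, 0.0]])
p = np.array([0.3, 0.5, 0.2])      # p^T[0]

for n in range(1, 1001):
    p = p @ P                      # p^T[n] = p^T[n-1] P
    if n in (1, 2, 10, 100, 1000):
        print(n, np.round(p, 4))   # settles at [0.25 0.5 0.25]
```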

Page 19

Discrete-Time Markov Chain Dynamics

Graph the state-probability vector vs. time for three different starting probability vectors pᵀ[0] = [1 0 0], pᵀ[0] = [0 1 0] and pᵀ[0] = [0 0 1] over the time range 0 ≤ n ≤ 10.

Page 20

Limiting State Probabilities for a Finite Markov Chain

For a finite Markov chain with an initial state-probability vector p[0], the limiting state probabilities, if they exist, are the elements of the vector π = lim_{n→∞} p[n]. There are three possible cases:

1. The limit exists and is independent of the initial state-probability vector,
2. The limit exists but it depends on the initial state-probability vector,
3. The limit does not exist.

If a finite Markov chain with state-transition matrix P and initial state-probability vector p[0] has a limiting state-probability vector π = lim_{n→∞} p[n], then πᵀ = πᵀ P and π is said to be stationary.

Page 21

Limiting State Probabilities for a Finite Markov Chain

If a finite Markov chain with a state-transition matrix P is initialized with a stationary probability vector p[0] = π, then p[n] = π for all n and the stochastic process X[n] is stationary. If, in a Markov chain, the state probabilities are stationary, it is said to be in steady-state.

Page 22

Limiting State Probabilities for a Finite Markov Chain

Consider again the two-state example.

Pⁿ = 1/(p+q) [ q + p λ_2ⁿ   p − p λ_2ⁿ
               q − q λ_2ⁿ   p + q λ_2ⁿ ]

Case 1. 0 < p + q < 2. In this case |λ_2| < 1 and

lim_{n→∞} Pⁿ = 1/(p+q) [ q  p
                         q  p ]

Page 23

Limiting State Probabilities for a Finite Markov Chain

Case 2. p = q = 0. In this case λ_2 = 1 and

Pⁿ = [ 1  0
       0  1 ]

Case 3. p = q = 1. In this case λ_2 = −1 and

Pⁿ = 1/2 [ 1 + (−1)ⁿ   1 − (−1)ⁿ
           1 − (−1)ⁿ   1 + (−1)ⁿ ]

Page 24

Limiting State Probabilities for a Finite Markov Chain

State probabilities approach limit independent of initial state

State probabilities approach limit dependent on initial state

State probabilities do not approach a limit

Page 25

State Classification

State j is accessible from state i (indicated by the notation i → j) if P_ij[n] > 0 for some n > 0. When state j is not accessible from state i, that is indicated by i ↛ j.

States i and j communicate (indicated by the notation i ↔ j) if i → j and j → i. A communicating class is a nonempty subset of states C such that if i ∈ C, then j ∈ C if and only if i ↔ j.

State i has a period d if d > 1 and d is the largest integer such that P_ii[n] = 0 whenever n is not evenly divisible by d. If d = 1, then state i is aperiodic.

Page 26

State Classification

In a finite Markov chain, a state i is transient if there exists a state j such that i → j but j ↛ i. If no such state exists, then state i is recurrent. If i is transient, then N_i, the number of visits to state i over all time, has a finite expected value E(N_i) < B, where B is a finite upper bound. A Markov chain is irreducible if there is only one communicating class.

Page 27

State Classification

Are states 0 and 1 periodic? It can be shown that

P_00[n] = P_11[n] = { p^(n/2) , n even
                    { 0       , n odd

States 0 and 1 are both transient and periodic. (A signal in an LTI system cannot be both transient and periodic.) State 2 is recurrent and aperiodic.

Page 28

State Classification

The communicating classes are C₁ = {0,1,2,3} and C₂ = {4,5,6}. Class C₁ is aperiodic. Class C₂ is periodic with period d = 3. States 0, 1, 2 and 3 are transient. States 4, 5 and 6 are recurrent. This Markov chain is reducible.

Page 29

State Classification

There is one communicating class, C₁ = {0,1,2,3,4}. Class C₁ is periodic with period d = 2. All states are recurrent. This Markov chain is irreducible.

Page 30

State Classification

The communicating classes are C₁ = {0,1,2,3,4}, C₂ = {5} and C₃ = {6,7,8,9}. Class C₁ is periodic with period d = 2. The other classes are aperiodic. States 0, 1, 2, 3, 4 and 5 are transient. States 6, 7, 8 and 9 are recurrent. This Markov chain is reducible.

Page 31

Limit Theorems

For an irreducible, aperiodic, finite Markov chain with states {0, 1, 2, …, N}, the limiting n-step state-transition matrix is

lim_{n→∞} Pⁿ = 1 πᵀ = [ π_0  π_1  ⋯  π_N
                        π_0  π_1  ⋯  π_N
                         ⋮    ⋮   ⋱   ⋮
                        π_0  π_1  ⋯  π_N ]

where 1 = [1 1 ⋯ 1]ᵀ, πᵀ = [π_0 π_1 ⋯ π_N], and π is the unique vector satisfying

πᵀ = πᵀ P ,  πᵀ 1 = 1

For an irreducible, aperiodic, finite Markov chain with state-transition matrix P and initial state-probability vector p[0],

lim_{n→∞} p[n] = π

Page 32

Limit Theorems

Example: Calculate the stationary state-probability vector π. In steady state,

π_0 = 0.2 π_0 + 0.2 π_1 + 0.4 π_2
π_1 = 0.8 π_0 + 0.3 π_1 + 0.6 π_2
π_2 = 0.5 π_1

or, in matrix form,

[ −0.8   0.2   0.4     [ π_0     [ 0
   0.8  −0.7   0.6       π_1   =   0
   0     0.5  −1   ]     π_2 ]     0 ]

These three equations are not linearly independent.

Page 33

Limit Theorems

The other equation needed is π_0 + π_1 + π_2 = 1. Then we can write

[ −0.8   0.2  0.4     [ π_0     [ 0
   0.8  −0.7  0.6       π_1   =   0
   1     1    1   ]     π_2 ]     1 ]

The solution is πᵀ = [0.25 0.5 0.25].
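The same computation as code: replace one redundant balance equation with the normalization row and solve the linear system (a minimal sketch):

```python
import numpy as np

P = np.array([[0.2, 0.8, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.6, 0.0]])

A = P.T - np.eye(3)      # balance equations (P^T - I) pi = 0 (rank-deficient)
A[-1, :] = 1.0           # replace the last row with sum(pi) = 1
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)                # [0.25 0.5 0.25] (up to round-off)
```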

Page 34

Limit Theorems

Alternatively, we could use

lim_{n→∞} Pⁿ = 1 πᵀ = [ π_0  π_1  ⋯  π_N
                        π_0  π_1  ⋯  π_N
                         ⋮    ⋮   ⋱   ⋮
                        π_0  π_1  ⋯  π_N ]

where

P = [ 0.2  0.8  0
      0.2  0.3  0.5
      0.4  0.6  0 ]

Page 35: Markov Chains - UTKweb.eecs.utk.edu/~roberts/ECE504/PresentationSlides/Markov... · Discrete-Time Markov Chains A discrete-time Markov chain is a discrete-time , discrete-value random

Limit Theorems

The diagonalized form of Pⁿ is

Pⁿ = [ 0.58      0.78            0.78
       0.58  −0.44 + j0.19   −0.44 − j0.19
       0.58    0.1 − j0.38     0.1 + j0.38 ]

     × [ 1ⁿ         0                  0
         0   (−0.25 + j0.19)ⁿ          0
         0          0          (−0.25 − j0.19)ⁿ ]

     × [ 0.43            0.87           0.43
         0.48 − j0.45   −0.32 − j0.58  −0.16 + j1.03
         0.48 + j0.45   −0.32 + j0.58  −0.16 − j1.03 ]

Two eigenvalues have a magnitude less than one.

Page 36

Limit Theorems

Finding the limit as n → ∞,

lim_{n→∞} Pⁿ = [ 0.58      0.78            0.78
                 0.58  −0.44 + j0.19   −0.44 − j0.19
                 0.58    0.1 − j0.38     0.1 + j0.38 ]

               × [ 1  0  0
                   0  0  0
                   0  0  0 ]

               × [ 0.43            0.87           0.43
                   0.48 − j0.45   −0.32 − j0.58  −0.16 + j1.03
                   0.48 + j0.45   −0.32 + j0.58  −0.16 − j1.03 ]

Multiplying out the matrices,

lim_{n→∞} Pⁿ = [ 0.25  0.5  0.25
                 0.25  0.5  0.25
                 0.25  0.5  0.25 ] = 1 πᵀ ⇒ πᵀ = [0.25 0.5 0.25]

Page 37

Partitioning

It can be shown that for an irreducible, aperiodic, finite Markov chain with state-transition matrix P and stationary probability vector π, partitioned into two disjoint state-space subsets S and S′,

Σ_{i∈S} Σ_{j∈S′} π_i P_ij = Σ_{j∈S′} Σ_{i∈S} π_j P_ji

Page 38

Partitioning

Example: Router with buffer size c. Using

Σ_{i∈S} Σ_{j∈S′} π_i P_ij = Σ_{j∈S′} Σ_{i∈S} π_j P_ji

we can write

π_{i+1} (1 − p) = π_i p ⇒ π_{i+1} = π_i p/(1 − p)

Page 39

Partitioning

Generalizing,

π_1 = (p/(1−p)) π_0 ,  π_2 = (p/(1−p))² π_0 ,  … ,  π_i = (p/(1−p))^i π_0

The state probabilities must sum to one. Therefore,

Σ_i π_i = π_0 Σ_i (p/(1−p))^i = 1 ⇒ π_0 = (1 − p/(1−p)) / (1 − (p/(1−p))^{c+1})

π_i = [ (1 − p/(1−p)) / (1 − (p/(1−p))^{c+1}) ] (p/(1−p))^i
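A short sketch of the resulting occupancy distribution, assuming (as in the geometric form above) states 0 through c and ratio r = p/(1−p); normalizing the geometric terms directly is equivalent to the closed-form denominator:

```python
import numpy as np

def buffer_distribution(p, c):
    """Stationary probabilities pi_0..pi_c with pi_i proportional to (p/(1-p))**i."""
    r = p / (1 - p)
    pi = r ** np.arange(c + 1)
    return pi / pi.sum()    # same as multiplying r**i by (1 - r)/(1 - r**(c + 1))

pi = buffer_distribution(p=0.4, c=5)
print(np.round(pi, 4), pi.sum())
```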

Page 40

Periodic States and Multiple Communicating Classes

For an irreducible, recurrent, periodic, finite Markov chain with state-transition matrix P, the stationary probability vector π is the unique non-negative solution of

πᵀ = πᵀ P ,  πᵀ 1 = 1

This is the same formula used to compute the limiting state probabilities for an irreducible, aperiodic finite Markov chain, but here they are called "stationary" instead of "limiting" because in a recurrent, periodic chain the probabilities don't actually converge to a limit but instead oscillate.

Page 41

Periodic States and Multiple Communicating Classes

Example:

P = [ 0  1  0  0
      0  0  1  0
      0  0  0  1
      1  0  0  0 ]

The stationarity conditions are

[ π_0  π_1  π_2  π_3 ] [ 0  1  0  0
                         0  0  1  0
                         0  0  0  1
                         1  0  0  0 ] = [ π_0  π_1  π_2  π_3 ] ,  π_0 + π_1 + π_2 + π_3 = 1

Page 42

Periodic States and Multiple Communicating Classes

Combining equations,

[ 1  −1   0   0     [ π_0     [ 0
  0   1  −1   0       π_1       0
  0   0   1  −1       π_2   =   0
  1   1   1   1 ]     π_3 ]     1 ]

The solution is πᵀ = [0.25 0.25 0.25 0.25]. These are the probabilities of being in each state at a randomly chosen time.

Page 43

Periodic States and Multiple Communicating Classes

Let the initial state be state 0. Then

pᵀ[n] = pᵀ[0] Pⁿ = [1 0 0 0] [ 0  1  0  0
                               0  0  1  0
                               0  0  0  1
                               1  0  0  0 ]ⁿ

The diagonalized form of Pⁿ is

Pⁿ = S Λⁿ S⁻¹
   = [ 1   1   1   1       [ 1   0    0     0               [ 1   1   1   1
       1   j  −1  −j         0   jⁿ   0     0         (1/4)   1  −j  −1   j
       1  −1   1  −1         0   0  (−1)ⁿ   0                 1  −1   1  −1
       1  −j  −1   j ]       0   0    0   (−j)ⁿ ]             1   j  −1  −j ]

Page 44

Periodic States and Multiple Communicating Classes

P² = [ 0  0  1  0      P³ = [ 0  0  0  1      P⁴ = [ 1  0  0  0
       0  0  0  1             1  0  0  0             0  1  0  0
       1  0  0  0             0  1  0  0             0  0  1  0
       0  1  0  0 ]           0  0  1  0 ]           0  0  0  1 ]

So the state probabilities

pᵀ[0] = [1 0 0 0]
pᵀ[1] = [0 1 0 0]
pᵀ[2] = [0 0 1 0]
pᵀ[3] = [0 0 0 1]
  ⋮

are periodic and do not converge to a limiting value, even though a stationary probability vector exists.
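A quick sketch makes the periodicity visible: the state-probability vector cycles with period 4 and never settles, while the uniform vector still satisfies πᵀP = πᵀ:

```python
import numpy as np

P = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

p = np.array([1.0, 0.0, 0.0, 0.0])   # start in state 0
for n in range(1, 9):
    p = p @ P
    print(n, p)                      # cycles through the four unit vectors

pi = np.full(4, 0.25)
print(np.allclose(pi @ P, pi))       # True: the uniform vector is stationary
```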

Page 45

Periodic States and Multiple Communicating Classes

For a Markov chain with recurrent communicating classes C₁, …, C_m, let π^(k) indicate the limiting state probabilities (for all states in the Markov chain) associated with entering class C_k. Given that the system starts in a transient state i, the limiting probability of state j is

lim_{n→∞} P_ij[n] = π_j^(1) P[B_i1] + ⋯ + π_j^(m) P[B_im]

where P[B_ik] is the conditional probability that the system enters class C_k. (The condition is that it started in state i.)

Page 46

Periodic States and Multiple Communicating Classes

Example: For each possible starting state i ∈ {0,1,2,3,4,5,6}, find the limiting state probabilities.

Page 47

Periodic States and Multiple Communicating Classes

The communicating classes are C₁ = {0,1}, C₂ = {3}, C₃ = {4,5,6} and C₄ = {2}. Classes C₁, C₂ and C₃ are recurrent and C₄ is transient. If the initial state is either 0 or 1, the limiting state probabilities for states 2-6 are all zero, and the analysis proceeds as though states 2-6 did not exist. So

π^(1) = [ π_0^(1)  π_1^(1)  0  0  0  0  0 ]ᵀ

and we can find the vector [π_0^(1) π_1^(1)]ᵀ from

[ π_0^(1)  π_1^(1) ] P^(1) = [ π_0^(1)  π_1^(1) ] ,  π_0^(1) + π_1^(1) = 1

where P^(1) is the state-transition matrix for class C₁ alone.

Page 48

Periodic States and Multiple Communicating Classes

Solving,

π^(1)ᵀ = [0.5882 0.4118 0 0 0 0 0]
π^(2)ᵀ = [0 0 0 1 0 0 0]
π^(3)ᵀ = [0 0 0 0 0.2941 0.4706 0.2353]

For the case of starting in state 2,

P[B_21] = 0.5 ,  P[B_22] = 0.2 ,  P[B_23] = 0.3

πᵀ = P[B_21] π^(1)ᵀ + P[B_22] π^(2)ᵀ + P[B_23] π^(3)ᵀ = [0.2941 0.2059 0 0.2 0.0882 0.1412 0.0706]
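The final vector is just the class-entry-probability-weighted mixture of the per-class vectors; a small sketch with the rounded values above:

```python
import numpy as np

# Limiting vectors for the recurrent classes (rounded values from the example)
pi1 = np.array([0.5882, 0.4118, 0, 0, 0, 0, 0])
pi2 = np.array([0, 0, 0, 1, 0, 0, 0])
pi3 = np.array([0, 0, 0, 0, 0.2941, 0.4706, 0.2353])

# Conditional probabilities of entering each class from transient state 2
B21, B22, B23 = 0.5, 0.2, 0.3

pi = B21 * pi1 + B22 * pi2 + B23 * pi3
print(np.round(pi, 4))   # ~ [0.2941 0.2059 0 0.2 0.0882 0.1412 0.0706]
```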

Page 49

Countably Infinite Chains: State Classification

The state space for a countably-infinite Markov chain is {0, 1, 2, …}. The state-transition matrix and state-probability vector still have the same notation, but now they both have infinite dimension. The basic relationships hold but cannot now be computed by matrix manipulation. They are

P_ij[n + m] = Σ_{k=0}^{∞} P_ik[n] P_kj[m]

p_j[n] = Σ_{i=0}^{∞} p_i[0] P_ij[n] = Σ_{i=0}^{∞} p_i[n − 1] P_ij

π_j = lim_{n→∞} p_j[n]

This last limit may or may not exist.

Page 50

Countably Infinite Chains: State Classification

For this countably-infinite Markov chain, whether or not a state is recurrent depends on the parameter p.

Page 51

Countably Infinite Chains: State Classification

Given that a Markov chain is in state i at some arbitrary time,

1. V_ii is the event "the system eventually returns to state i",
2. T_ii is the number of transitions until the system first returns to state i, and
3. N_i is the number of times in the future that the system returns to state i.

Page 52

Countably Infinite Chains: State Classification

If a system starts in state i it will return if the number of transitions T_ii is finite.

P(V_ii) = P[T_ii < B] = lim_{n→∞} F_{T_ii}[n]

where B is a finite upper bound and F_{T_ii}[n] is the cumulative distribution function (CDF) for T_ii as a function of discrete time n. For a countably-infinite Markov chain, state i is recurrent if P(V_ii) = 1; otherwise it is transient.

Page 53

Countably Infinite Chains: State Classification

Example: Is state 0 transient or recurrent? T_00 will be greater than n if the system reaches state n before returning to state zero. The probability that T_00 is greater than n is

P[T_00 > n] = (1/2) × (2/3) × (3/4) × ⋯ × ((n−1)/n) = (1·2·3⋯(n−1)) / (2·3⋯(n−1)·n) = 1/n

Page 54

Countably Infinite Chains: State Classification

The CDF for T_ii is F_{T_ii}[n] = 1 − 1/n and the probability that a return to state 0 eventually happens is

P(V_ii) = lim_{n→∞} F_{T_ii}[n] = lim_{n→∞} (1 − 1/n) = 1

An eventual return to state 0 is guaranteed, and state 0 is recurrent.

Page 55

Countably Infinite Chains: State Classification

If a state is recurrent, then over an infinite time the expected number of returns to that state, E(N_i), is infinite. If a state is transient, the expected number of returns to the state is finite. Therefore, a state is recurrent if, and only if, E(N_i) is infinite. The expected number of visits to state i over all time is

E(N_i) = Σ_{n=1}^{∞} P_ii[n]

(the sum starts at n = 1, not 0).

Page 56

Countably Infinite Chains: State Classification

Example Random Walk

Is state 0 recurrent?

Page 57

Countably Infinite Chains: State Classification

If we start at state 0, in order to return to state 0 after n steps, n must be even. Half the steps are to the right and half the steps are to the left. So we are looking for the probability, in n = 2m steps (m an integer), that we take exactly m steps to the right and m steps to the left. This is 2m trials of an experiment (one step), exactly m of them being of a certain type, so the probability is

P_00[n] = C(2m, m) p^m (1 − p)^m

where C(2m, m) is the binomial coefficient "2m choose m".

Page 58

Countably Infinite Chains: State Classification

E(N_0) = Σ_{n=1}^{∞} P_00[n] = Σ_{n even} P_00[n] = Σ_{m=1}^{∞} P_00[2m]

E(N_0) = Σ_{m=1}^{∞} C(2m, m) p^m (1 − p)^m

We can use Stirling's approximation to help sum this series:

n! ≅ √(2πn) (n/e)^n for large n

where e is the base of the natural logarithm.

Page 59: Markov Chains - UTKweb.eecs.utk.edu/~roberts/ECE504/PresentationSlides/Markov... · Discrete-Time Markov Chains A discrete-time Markov chain is a discrete-time , discrete-value random

Countably Infinite Chains: State Classification

(2m choose m) = (2m)! / (m! m!) ≅ [√(4πm) (2m/e)^(2m)] / [2πm (m/e)^(2m)] = 4^m / √(πm)

P_00(2m) = (2m choose m) p^m (1 − p)^m ≅ (4^m / √(πm)) [p(1 − p)]^m = α^m / √(πm)

E[N_00] ≅ Σ_{m=1}^∞ α^m / √(πm)

where α = 4p(1 − p). If p = 1/2, α = 1, the series diverges, the expected number of returns to state 0 is infinite and state 0 is recurrent. Otherwise (α < 1) the series converges, the expected number of returns to state 0 is finite and state 0 is transient.
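As a numerical sanity check (a sketch, not part of the original slides), the partial sums of E[N_00] can be computed directly from the exact binomial terms. For p ≠ 1/2 the series converges, to 1/√(1 − 4p(1 − p)) − 1 by the central binomial generating function, while for p = 1/2 the partial sums keep growing like 2√(m/π):

```python
def expected_returns(p, terms=5000):
    """Partial sum of E[N_00] = sum_{m>=1} C(2m, m) p^m (1-p)^m."""
    total, t = 0.0, 1.0  # t holds C(2m, m) p^m (1-p)^m, updated in O(1) per step
    for m in range(1, terms + 1):
        # C(2m, m) = C(2m-2, m-1) * (2m)(2m-1) / m^2
        t *= (2 * m) * (2 * m - 1) / (m * m) * p * (1 - p)
        total += t
    return total
```

For p = 0.4 the sum settles near 1/√0.04 − 1 = 4; for p = 1/2 it grows without bound, confirming that the walk is recurrent exactly when p = 1/2.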

Countably Infinite Chains: State Classification

If a state is recurrent it may be positive recurrent or null recurrent. Positive recurrent means that the expected time of return to the state is finite. Null recurrent means that the expected time to return to the state is infinite.

                     Probability of     Expected Number   Expected Time
                     Eventual Return    of Returns        of First Return
Transient            < 1                Finite            Infinite
Null Recurrent       1                  Infinite          Infinite
Positive Recurrent   1                  Infinite          Finite

Countably Infinite Chains: State Classification

Example
We have already seen that state 0 is recurrent. Is it positive recurrent or null recurrent?

The probability that the time of return equals n is

P[T_00 = n] = P[T_00 > n − 1] − P[T_00 > n] = 1/(n − 1) − 1/n = 1/(n(n − 1)),  n > 1

Countably Infinite Chains: State Classification

The expected time of first return is

E[T_00] = Σ_{n=2}^∞ n P[T_00 = n] = Σ_{n=2}^∞ n/(n(n − 1)) = Σ_{n=2}^∞ 1/(n − 1) = Σ_{n=1}^∞ 1/n

This is a harmonic series and it diverges, indicating that the expected time of first return to state 0 is infinite and therefore that state 0 is null recurrent.
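A quick numeric check (not in the original slides) of these two facts, using P[T_00 = n] = 1/(n(n − 1)): the return probabilities telescope toward 1, while the truncated mean grows like the harmonic series:

```python
from math import log

N = 10**6
# P[T_00 <= N]: telescoping sum equal to 1 - 1/N, so return is certain as N grows
p_return = sum(1.0 / (n * (n - 1)) for n in range(2, N + 1))
# Truncated E[T_00]: each term n * P[T_00 = n] = 1/(n-1), a harmonic series
mean_trunc = sum(n * (1.0 / (n * (n - 1))) for n in range(2, N + 1))
```

`p_return` approaches 1 while `mean_trunc` grows roughly like ln N, mirroring "certain return, infinite expected return time."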

Countably Infinite Chains: State Classification

In a communicating class all states have the same recurrence qualities. They are either

1. All transient,
2. All null recurrent, or
3. All positive recurrent.

Countably Infinite Chains: Stationary Probabilities

For an irreducible, aperiodic, positive-recurrent Markov chain with states {0, 1, 2, …}, the limiting n-step transition probabilities are

lim_{n→∞} P_ij(n) = π_j

where {π_j | j = 0, 1, 2, …} are the unique state probabilities satisfying

π_j = Σ_{i=0}^∞ π_i P_ij,  j = 0, 1, 2, …   and   Σ_{j=0}^∞ π_j = 1.

Countably Infinite Chains: Stationary Probabilities

Example
Find the stationary probabilities and specify for which values of p these probabilities exist.

π_i p = π_{i+1} (1 − p)  ⇒  π_{i+1} = [p / (1 − p)] π_i

Countably Infinite Chains: Stationary Probabilities

π_1 = [p / (1 − p)] π_0,  π_2 = [p / (1 − p)]² π_0,  …,  π_i = [p / (1 − p)]^i π_0

The sum of the stationary probabilities must be one (if they exist):

Σ_{i=0}^∞ π_i = π_0 Σ_{i=0}^∞ [p / (1 − p)]^i = 1

This summation converges if p < 1/2.
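A short sketch (assuming p < 1/2 so the geometric sum converges): π_0 = 1 − p/(1 − p) follows from summing the geometric series, and the remaining probabilities follow the recursion:

```python
p = 0.3                  # any p < 1/2 works here
r = p / (1 - p)          # common ratio from pi_{i+1} = r * pi_i
pi0 = 1 - r              # from pi_0 * sum(r**i) = pi_0 / (1 - r) = 1
pi = [pi0 * r**i for i in range(200)]  # truncation of the infinite chain
```

The computed probabilities sum to one (up to the negligible truncated tail) and satisfy the balance relation π_i p = π_{i+1}(1 − p) term by term.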

Continuous-Time Markov Chains

A continuous-time Markov chain {X(t) | t ≥ 0} is a continuous-time, discrete-value random process such that for an infinitesimal time step Δ

P[X(t + Δ) = j | X(t) = i] = q_ij Δ            (probability of an i-j transition in time Δ)

P[X(t + Δ) = i | X(t) = i] = 1 − Δ Σ_{j≠i} q_ij   (probability of an i-i transition in time Δ)

This implies that

P[X(t + Δ) ≠ i | X(t) = i] = Δ Σ_{j≠i} q_ij      (probability of a transition out of state i in time Δ)

Continuous-Time Markov Chains

Since Δ is very small, the product q_ij Δ is also small, making the probability of a transition in any one time slot small. Thus for most of these small time slots nothing happens. Only rarely does a transition occur.

Continuous-Time Markov Chains

For a Markov chain in state i the time until the next transition is an exponential random variable with parameter

ν_i = Σ_{j≠i} q_ij

called the departure rate of state i. For an exponential random variable, the waiting time until the next transition is independent of all previous waiting times. That is, no matter how long we have waited without a transition, the expected time of the next transition is still the same 1/ν_i. The time between transitions is random with an exponential PDF.

Continuous-Time Markov Chains

For a continuous-time Markov chain, when the system enters state i a Poisson process N_ik(t) with rate parameter q_ik starts for every other state k. If the process N_ij(t) happens to be the first to have an arrival, then the system transitions to state j. When the system is in state i the time until departure D_i is an exponential random variable with expected time of transition 1/ν_i. Given the event that the system departs state i in the time interval t < D_i ≤ t + Δ, the conditional probability of the event D_ij that the system transitioned to state j is

P_ij = P[D_ij | D_i] = (q_ij Δ) / (ν_i Δ) = q_ij / ν_i

If we ignore the time spent in each state, the transition probabilities P_ij can be viewed as the transition probabilities of a discrete-time Markov chain.

Continuous-Time Markov Chains

For a continuous-time Markov chain with transition rates q_ij and state-i departure rates ν_i, the embedded discrete-time Markov chain has transition probabilities P_ij = q_ij / ν_i for states i with ν_i > 0, and P_ii = 1 for states i with ν_i = 0. The communicating classes of a continuous-time Markov chain are given by the communicating classes of its embedded discrete-time Markov chain. A continuous-time Markov chain is irreducible if its embedded discrete-time Markov chain is irreducible. An irreducible continuous-time Markov chain is positive recurrent if, for all states i, the time T_ii to return to state i satisfies E[T_ii] ≤ B, where B is a finite upper bound.
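The embedded-chain construction can be sketched in a few lines; the rates below are made up for a hypothetical 3-state chain, and numpy is assumed:

```python
import numpy as np

# Hypothetical transition rates q_ij (diagonal zero) for a 3-state chain.
Q = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 4.0, 0.0]])

nu = Q.sum(axis=1)       # departure rates nu_i = sum_{j != i} q_ij
P = Q.copy()
for i, rate in enumerate(nu):
    if rate > 0:
        P[i] /= rate     # embedded-chain probabilities P_ij = q_ij / nu_i
    else:
        P[i, i] = 1.0    # nu_i = 0: the embedded chain stays in state i
```

Each row of P sums to one, so P is a valid discrete-time transition matrix regardless of how long the chain actually dwells in each state.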

Continuous-Time Markov Chains

Example
In the summer, an air conditioner is in one of three states: (0) off, (1) low and (2) high. While in the off state, transitions to the low state occur after an exponential time with an expected value of 3 minutes. While in the low state, transitions to the off state or the high state are equally likely and transitions from the low state occur at a rate of 0.5 per minute. When the system is in its high state, it makes a transition to the low state with a probability of 2/3 or to the off state with a probability of 1/3. The time spent in the high state is an exponential random variable with an expected value of 2 minutes. Model this air conditioning system using a continuous-time Markov chain.


Continuous-Time Markov Chains

The transition rate from state 0 to state 1 is q_01 = 1/3 per minute and the transition rate from state 0 to state 2 is q_02 = 0. The transition rate out of state 1 is ν_1 = 0.5 per minute. The fact that transitions from 1 to 0 and 1 to 2 are equally likely means that

q_10 / ν_1 = q_12 / ν_1 = 0.5.  Therefore q_10 = q_12 = 0.5 × 0.5 = 0.25.

Since the expected time in the high state is 2 minutes, ν_2 = 1/2, with q_21 / ν_2 = 2/3 and q_20 / ν_2 = 1/3. Therefore

q_20 = (1/3) ν_2 = (1/3)(1/2) = 1/6,  q_21 = (2/3) ν_2 = (2/3)(1/2) = 1/3

Continuous-Time Markov Chains

In a continuous-time Markov chain the process is characterized by the transition rates rather than the transition probabilities. The transition rate from a state to itself is taken as zero because in such a transition nothing has actually changed. The state-transition matrix is

Q = [[0,    q_01, ⋯, q_0N],
     [q_10, 0,    ⋯, q_1N],
     [⋮,    ⋮,    ⋱, ⋮   ],
     [q_N0, q_N1, ⋯, 0   ]]

Continuous-Time Markov Chains

Another matrix, the rate matrix R, is also useful:

R = [[−ν_0, q_01, ⋯, q_0N],
     [q_10, −ν_1, ⋯, q_1N],
     [⋮,    ⋮,    ⋱, ⋮   ],
     [q_N0, q_N1, ⋯, −ν_N]]

Since ν_i = Σ_{j≠i} q_ij, each row of R must sum to zero.
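Using the air-conditioner rates worked out a few slides back, Q and R can be assembled and the zero-row-sum property checked (a sketch assuming numpy):

```python
import numpy as np

# q_ij for the air-conditioner example: q_01 = 1/3, q_10 = q_12 = 1/4,
# q_20 = 1/6, q_21 = 1/3, diagonal zero.
Q = np.array([[0.0, 1/3, 0.0],
              [1/4, 0.0, 1/4],
              [1/6, 1/3, 0.0]])

nu = Q.sum(axis=1)       # departure rates nu_i
R = Q - np.diag(nu)      # rate matrix: off-diagonal q_ij, diagonal -nu_i
```

Every row of R sums to zero by construction, since the diagonal entry −ν_i exactly cancels the off-diagonal rates in that row.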

Continuous-Time Markov Chains

The probability that the system is in state i is p_i(t) = P[X(t) = i]. If the number of states is finite the state probabilities can be written in matrix form as p(t) = [p_0(t) p_1(t) ⋯ p_N(t)]^T. The system is characterized by a system of differential equations and, in the case of a finite number of states, by one vector differential equation in p(t). The evolution of state probabilities over time is governed by

d/dt p_j(t) = Σ_{all i} r_ij p_i(t),  j = 0, 1, 2, …   or   d/dt p^T(t) = p^T(t) R

Continuous-Time Markov Chains

d/dt p_j(t) = Σ_{all i} r_ij p_i(t)

Here d/dt p_j(t) is the rate of increase of the probability of being in state j, p_i(t) is the probability of being in state i, and r_ij is the rate of transitions from state i to state j (one element of the rate matrix R).

Continuous-Time Markov Chains

The vector form of the equations can be solved and the solution form is

p^T(t) = p^T(0) e^{Rt}   where   e^{Rt} = Σ_{k=0}^∞ (Rt)^k / k!

is known as the matrix exponential. Usually, in practice, the desired solution is the steady-state solution for t → ∞.
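A sketch of the transient solution for the air-conditioner chain (numpy assumed; in practice scipy.linalg.expm would be the usual tool, but here the matrix exponential is done by naive scaling and squaring):

```python
import numpy as np

def expm(M, squarings=20, terms=20):
    """exp(M) via exp(M) = exp(M / 2^s)^(2^s) with a short Taylor series."""
    S = M / 2.0**squarings
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ S / k      # next Taylor term S^k / k!
        out = out + term
    for _ in range(squarings):
        out = out @ out          # undo the scaling by repeated squaring
    return out

R = np.array([[-1/3, 1/3, 0.0],  # air-conditioner rate matrix
              [1/4, -1/2, 1/4],
              [1/6, 1/3, -1/2]])

p0 = np.array([1.0, 0.0, 0.0])   # start in the off state
pt = p0 @ expm(R * 50.0)         # p^T(t) = p^T(0) e^{Rt} at t = 50 minutes
```

By t = 50 minutes the probabilities have essentially reached their steady-state values.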

Continuous-Time Markov Chains

For an irreducible, positive-recurrent, continuous-time Markov chain, the state probabilities satisfy

Σ_{all i} r_ij p_i = 0  (p^T R = 0^T)   and   Σ_{all j} p_j = 1  (p^T 1 = 1)

In steady state, ν_j p_j = Σ_{i≠j} p_i q_ij: the rate of transitions out of state j equals the rate of transitions into state j.

Continuous-Time Markov Chains

Example
In a continuous-time ON-OFF process, alternating OFF and ON (states 0 and 1) periods have independent exponential durations. The average ON period lasts 1/μ seconds while the average OFF period lasts 1/λ seconds. Draw the state-transition diagram and find the limiting state probabilities. (The labels on the arcs of the diagram are transition rates, not transition probabilities.)

Continuous-Time Markov Chains

p^T R = 0^T:  [p_0 p_1] [[−λ, λ], [μ, −μ]] = [0 0]

p^T 1 = 1:  p_0 + p_1 = 1

Combining equations,

[p_0 p_1] [[−λ, 1], [μ, 1]] = [0 1]

Solving,

p_0 = −μ/(−λ − μ) = μ/(λ + μ),   p_1 = −λ/(−λ − μ) = λ/(λ + μ)

Continuous-Time Markov Chains

Example

Find the stationary distribution for the Markov chain describing the air conditioning system.


Continuous-Time Markov Chains

p^T R = 0^T with R = [[−1/3, 1/3, 0], [1/4, −1/2, 1/4], [1/6, 1/3, −1/2]]

p^T 1 = 1  ⇒  p_0 + p_1 + p_2 = 1

Transposing, R^T p = 0 gives three balance equations in p_0, p_1, p_2. Replacing the last of them with the normalization and solving,

[[−1/3, 1/4, 1/6], [1/3, −1/2, 1/3], [1, 1, 1]] [p_0, p_1, p_2]^T = [0, 0, 1]^T

p^T = [2/5 2/5 1/5]
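The same computation as a numpy sketch: transpose R, overwrite one balance equation with the normalization, and solve the resulting linear system:

```python
import numpy as np

R = np.array([[-1/3, 1/3, 0.0],   # air-conditioner rate matrix
              [1/4, -1/2, 1/4],
              [1/6, 1/3, -1/2]])

A = R.T.copy()
A[-1, :] = 1.0                    # replace the last balance equation with sum(p) = 1
b = np.array([0.0, 0.0, 1.0])
p = np.linalg.solve(A, b)         # stationary distribution
```

The solve returns p = [0.4, 0.4, 0.2], matching the hand calculation.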

Birth-Death Processes and Queueing Systems

A continuous-time Markov chain is a birth-death process if the transition rates satisfy q_ij = 0 for |i − j| > 1. The transition rate q_{i,i−1} = μ_i is called the service rate and the transition rate q_{i,i+1} = λ_i is called the arrival rate. We will assume that μ_i > 0 for all states i that are reachable from state 0. This ensures that the chain is irreducible.

Birth-Death Processes and Queueing Systems

For a birth-death queue with arrival rates λ_i and service rates μ_i, the stationary probabilities satisfy

λ_{i−1} p_{i−1} + μ_{i+1} p_{i+1} = (λ_i + μ_i) p_i   and   Σ_{i=0}^∞ p_i = 1.

Define the load as ρ_i = λ_i / μ_{i+1}. The limiting state probabilities, if they exist, are

p_i = (Π_{j=0}^{i−1} ρ_j) / (1 + Σ_{k=1}^∞ Π_{j=0}^{k−1} ρ_j)

and they exist if Σ_{k=1}^∞ Π_{j=0}^{k−1} ρ_j converges.

Birth-Death Processes and Queueing Systems

Π_{j=0}^{i−1} ρ_j: the product of the loads ρ_j on states 0 through i − 1 (the numerator of p_i).

1 + Σ_{k=1}^∞ Π_{j=0}^{k−1} ρ_j: one plus the sum of the products of loads on states 0 through k − 1, for all k > 0 (the denominator of p_i).
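The general birth-death formula can be sketched as a truncated computation (an approximation for infinite chains; numpy assumed), checked against the M/M/1 closed form p_n = (1 − ρ)ρ^n:

```python
import numpy as np

def birth_death_limits(lam, mu, n_states):
    """Truncated limiting probabilities: p_i proportional to
    prod_{j=0}^{i-1} lam(j) / mu(j+1)."""
    w = [1.0]
    for i in range(1, n_states):
        w.append(w[-1] * lam(i - 1) / mu(i))   # multiply in the next load
    w = np.array(w)
    return w / w.sum()                         # normalize over the truncation

# Constant rates give an M/M/1 queue with rho = 0.3, so p_n = 0.7 * 0.3**n.
p = birth_death_limits(lambda i: 0.6, lambda i: 2.0, 60)
```

With 60 states the truncation error is on the order of 0.3^60, so the result is indistinguishable from the infinite-chain answer.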

Birth-Death Processes and Queueing Systems

Naming convention for queues: A/S/n/m, with A for the arrival process, S for the service process, n for the number of servers and m for the number of customers. For positions A and S in the queue name,

M means the process is a Poisson process and memoryless
D means the process is deterministic with a uniform rate
G means the process is a general process

When the number of customers in the system is less than the number of servers, an arriving customer is immediately assigned to a server. When m is finite and the queue has m customers in it, new arrivals are blocked (discarded). If m is not specified, it is assumed to be infinite.

Birth-Death Processes and Queueing Systems

The M/M/1 Queue
The arrivals are a Poisson process with rate parameter λ, independent of the service requirements of the customers. The service time is an exponential random variable (because the exponential random variable is memoryless), independent of the arrival process. There is only one server in the system, so the departure rate from any state i > 0 is μ_i = μ. The limiting state probabilities are

p_n = (1 − ρ) ρ^n,  n = 0, 1, 2, …

Birth-Death Processes and Queueing Systems

Example
Cars arrive at a toll booth as a Poisson process at a rate of 0.6 cars per minute. The time to service a car is an exponential random variable with an expected value of 0.3 minutes. What are the limiting state probabilities for N, the number of cars at the toll booth? What is the probability that there are no cars at the toll booth at any randomly-chosen time in the future?

ρ = λ/μ = 0.6/(1/0.3) = 0.18    p_n = (1 − 0.18) × 0.18^n = 0.82 × 0.18^n

The probability that there are no cars at the toll booth is p_0 = 0.82 × 0.18^0 = 0.82
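The toll-booth numbers as a quick script (a sketch of the M/M/1 result above):

```python
lam, mu = 0.6, 1 / 0.3               # arrivals and service completions per minute
rho = lam / mu                       # load = 0.18
p = [(1 - rho) * rho**n for n in range(50)]   # truncated M/M/1 distribution
p0 = p[0]                            # probability of an empty toll booth
```

The truncated probabilities sum to one up to 0.18^50, and p0 recovers the 0.82 from the slide.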

Birth-Death Processes and Queueing Systems

The M/M/∞ Queue
With infinitely many servers, each customer is served immediately upon arrival. When n customers are in the system, the system departure rate is nμ, although the service rate (which applies to individual customers) is still μ.

Birth-Death Processes and Queueing Systems

The M/M/∞ queue with arrival rate λ and service rate μ has limiting state probabilities

p_n = { ρ^n e^{−ρ} / n!,  n = 0, 1, 2, …
      { 0,                otherwise

where ρ = λ/μ.

Birth-Death Processes and Queueing Systems

Example
At the beach in the summer, swimmers enter the ocean as a Poisson process with a rate parameter of 300 swimmers per hour. The time spent in the ocean by a randomly-chosen swimmer is an exponential random variable with an expected value of 20 minutes. Find the limiting state probabilities of the number of swimmers in the ocean.

λ = 300 / hour,  μ = 3 / hour,  ρ = λ/μ = 100

p_n = { 100^n e^{−100} / n!,  n = 0, 1, 2, …
      { 0,                    otherwise

Birth-Death Processes and Queueing Systems

The average number of swimmers in the ocean is the expected value of n,

E(n) = Σ_{n=0}^∞ n p_n = Σ_{n=0}^∞ n (100^n e^{−100} / n!) = e^{−100} Σ_{n=1}^∞ n 100^n / n!

Using the infinite-series definition of the exponential function, e^x = Σ_{n=0}^∞ x^n / n!, we get

E(n) = 100 e^{−100} Σ_{n=1}^∞ 100^{n−1} / (n − 1)! = 100 e^{−100} Σ_{n=0}^∞ 100^n / n! = 100 e^{−100} e^{100} = 100
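The series manipulation can be confirmed numerically without overflow by updating the Poisson terms iteratively (a sketch; the tail beyond n = 400 is negligible for ρ = 100):

```python
from math import exp

rho, N = 100.0, 400
term = exp(-rho)             # rho^n e^{-rho} / n! at n = 0
total = mean = 0.0
for n in range(N + 1):
    total += term            # running sum of p_n
    mean += n * term         # running sum of n * p_n
    term *= rho / (n + 1)    # advance to the term for n + 1
```

`total` comes out essentially 1 and `mean` essentially 100, matching E(n) = ρ.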

Birth-Death Processes and Queueing Systems

The M/M/c/c Queue
This queue has c servers and a capacity for c customers in the system. Customers arrive as a Poisson process with an arrival rate λ. If there are c − 1 or fewer customers being served, an arriving customer is immediately served. If there are c customers being served, an arriving customer is turned away and denied service (it never enters the system). The service time of a customer admitted to the system is an exponential random variable with an expected value of 1/μ.

Birth-Death Processes and Queueing Systems

The limiting state probabilities satisfy

p_n = { (ρ^n / n!) / Σ_{j=0}^{c} (ρ^j / j!),  n = 0, 1, 2, …, c
      { 0,                                    otherwise

Since this queue can only accommodate c customers, the probability that a customer will not be served is the same as the probability that there are c customers currently being served, which is

p_c = (ρ^c / c!) / Σ_{j=0}^{c} (ρ^j / j!)

Birth-Death Processes and Queueing Systems

Example
A telephone exchange can handle up to 100 calls simultaneously. Call duration is an exponential random variable with an expected value of 2 minutes. If requests for connection occur as a Poisson process with a rate of 40 calls per minute, what is the probability that a caller will get a busy signal?

ρ = λ/μ = 40/(1/2) = 80,  c = 100

The probability of a busy signal is

p_100 = (80^100 / 100!) / Σ_{j=0}^{100} (80^j / j!)

The answer is 0.004.
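Evaluating this blocking probability directly involves enormous intermediate values like 80^100, so a sketch using the standard Erlang B recursion B(n) = ρB(n−1)/(n + ρB(n−1)) is the more practical route:

```python
rho, c = 80.0, 100                # offered load and number of lines

b = 1.0                           # B(0) = 1
for n in range(1, c + 1):
    b = rho * b / (n + rho * b)   # Erlang B recursion, numerically stable
```

The recursion yields b near 0.004, matching the slide's answer.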
