Section 10: Discrete-Time Markov Chains
-
Discrete-Time Markov Chains
Professor Izhak Rubin
Electrical Engineering Department
UCLA
2014-2015 by Izhak Rubin
-
Prof. Izhak Rubin 2
Discrete-Time Markov Chain: Definition

X = {X_k, k = 0,1,2,...} is a discrete-time stochastic process whose states assume values in a countable state space S, such as S = {0,1,2,...}, S = {0,1,...,N}, or S = {a1, a2, a3, ...}. It is said to be a Markov Chain if it satisfies the Markov Property.

Markov Property (given the present and past states, the distribution of future states is independent of the past):

P(X_{k+1} = j | X_0 = i_0, ..., X_{k-1} = i_{k-1}, X_k = i) = P(X_{k+1} = j | X_k = i) = P_k(i,j),

for each time k >= 0 and states (i_0, ..., i_{k-1}, i, j) in S.

Assume a time-homogeneous process: its statistical behavior is characterized by the (stationary) transition probability function (TPF)

P_k(i,j) = P(i,j) = P(X_1 = j | X_0 = i), i,j in S, k >= 1.

Transition Probability Matrix: P_T = {P(i,j), i,j in S}.

[Figure: a sample path of X_k over time slots k = 0,1,2,3,4,5.]
-
Transition Probability Function (TPF): Properties

Properties of P_T:

1. P(i,j) >= 0, each i,j in S;
2. sum_{j in S} P(i,j) = 1, each i in S.

Initial Distribution:

P_0(i) = P(X_0 = i), i in S.

Calculation of the joint state distribution:

P(X_0 = i_0, X_1 = i_1, ..., X_k = i_k) = P_0(i_0) prod_{j=1}^{k} P(i_{j-1}, i_j), for k >= 1, (i_0, ..., i_k) in S^{k+1}.
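The joint-distribution formula multiplies the initial probability by one TPF factor per transition. A minimal Python sketch (the two-state chain and the sample path below are illustrative assumptions, not from the slides):

```python
# Joint probability of a sample path:
# P(X_0 = i_0, ..., X_k = i_k) = P_0(i_0) * prod_{j=1}^{k} P(i_{j-1}, i_j).

def path_probability(P0, P, path):
    """P0: dict state -> initial prob; P: dict (i, j) -> transition prob."""
    prob = P0[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P.get((i, j), 0.0)
    return prob

# Illustrative two-state chain (the numbers are assumed values).
P0 = {0: 1.0, 1: 0.0}
P = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}
print(path_probability(P0, P, [0, 0, 1, 1]))  # 1.0 * 0.7 * 0.3 * 0.6
```

Each factor is looked up from the TPF; a transition not present in the dictionary contributes probability 0, making the whole path impossible.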
-
Example 1: Two-State Markov Chain

X = DTMC with binary RVs on state space S = {0,1}.
The transition probability function is given by:

P = [ 1-alpha    alpha  ]
    [  beta     1-beta  ],   0 <= alpha <= 1; 0 <= beta <= 1.
-
Example 2: Binomial Counting Process; Geometric Point Process

Assume {M_k, k >= 1} are i.i.d. RVs with

P(M_k = j) = 1-p,  j = 0,
             p,    j = 1,

where M_k = number of arrivals during the k-th slot.

Define N_0 = 0 and

N_k = sum_{i=1}^{k} M_i = number of arrivals in the first k slots, k = 1,2,3,...

Then N = {N_k, k >= 0} is a discrete-time counting process and a DT Markov Chain with

P(i,j) = 1-p,  if j = i,
         p,    if j = i+1.
-
Example 2 (Cont.): Binomial Counting Process and Geometric Point Process

The Markov Property holds:

P(N_{k+1} = j | N_1 = n_1, ..., N_{k-1} = n_{k-1}, N_k = i)
= P(M_{k+1} = j - i | M_1 = n_1, M_2 = n_2 - n_1, ..., M_k = i - n_{k-1})
= P(M_{k+1} = j - i),

so that

P(i,j) = 1-p,  if j = i,
         p,    if j = i+1,

and N is a DT MC.

Associated discrete-time point process {A_n, n >= 0}, A_0 = 0,
A_n = time (slot) of the n-th occurrence.

Distribution of the interarrival times T_n = A_n - A_{n-1}, n >= 1:

P(T_n = i) = p(1-p)^{i-1}, i >= 1.

Hence, A = DT renewal point process with intervals that are Geometrically distributed = Geometric Point Process.

The distribution of the counting variable is Binomial:

P(N_k = n) = C(k,n) p^n (1-p)^{k-n}, n = 0,1,...,k.
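The Bernoulli-slot construction is easy to simulate: draw i.i.d. slot indicators and check that the empirical mean interarrival time is close to the geometric mean 1/p. A sketch with assumed parameter values (p = 0.3, 200,000 slots):

```python
import random

random.seed(1)
p = 0.3          # per-slot arrival probability (assumed value)
slots = 200_000  # number of simulated time slots

# i.i.d. slot indicators M_k and the slots A_n at which arrivals occur.
M = [1 if random.random() < p else 0 for _ in range(slots)]
arrival_slots = [k + 1 for k, m in enumerate(M) if m == 1]

# Interarrival times T_n = A_n - A_{n-1} should be Geometric(p),
# so their empirical mean should be close to 1/p.
intervals = [b - a for a, b in zip(arrival_slots, arrival_slots[1:])]
mean_T = sum(intervals) / len(intervals)
print(round(mean_T, 2))  # close to 1/p ≈ 3.33
```

With this many slots the estimate lands well within a few hundredths of the theoretical value 1/p.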
-
Transient State Analysis

State distribution at time k: P_k(j) = P(X_k = j).

Define the m-step TPF:

P^m(i,j) = P(X_m = j | X_0 = i).

Compute P^m(i,j) recursively:

P^m(i,j) = sum_{l in S} P(X_m = j, X_{m-1} = l | X_0 = i)
         = sum_{l in S} P(X_m = j | X_{m-1} = l, X_0 = i) P(X_{m-1} = l | X_0 = i)
         = sum_{l in S} P^{m-1}(i,l) P(l,j),

which is used to recursively compute the m-step TPF.

We can compute the state distribution at time k by using the k-step TPF:

P_k(j) = sum_{i in S} P_0(i) P^k(i,j).

Alternatively, starting with a given initial distribution P_0(i), we can proceed recursively:

P_{k+1}(j) = sum_{i in S} P_k(i) P(i,j), k = 0,1,2,...; j in S.

Note: P^{m+n}(i,j) = sum_{l in S} P^m(i,l) P^n(l,j), each m >= 1, n >= 1. (Chapman-Kolmogorov Equation)
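The m-step recursion is just repeated matrix multiplication; a sketch that computes P^m for a small chain and checks the Chapman-Kolmogorov identity numerically (the 3x3 matrix is the one used in the numerical example of this section):

```python
# m-step TPF P^m(i,j) computed via the recursion P^m = P^{m-1} P.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def m_step_tpf(P, m):
    Pm = P
    for _ in range(m - 1):
        Pm = mat_mul(Pm, P)
    return Pm

P = [[0.2, 0.3, 0.5],
     [0.4, 0.2, 0.4],
     [0.6, 0.3, 0.1]]

# Chapman-Kolmogorov: P^{m+n}(i,j) = sum_l P^m(i,l) P^n(l,j), here m=2, n=3.
P5 = m_step_tpf(P, 5)
CK = mat_mul(m_step_tpf(P, 2), m_step_tpf(P, 3))
print(all(abs(P5[i][j] - CK[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

Each row of P^m remains a probability distribution, which is a handy sanity check when iterating.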
-
Transient State Analysis: Two-State Markov Chain

Example: Two-state discrete-time Markov Chain, X, with state space S = {0,1}.

Use: P_{k+1}(j) = sum_{i in S} P_k(i) P(i,j), k >= 0,
to obtain

P_{k+1}(0) = P_k(0)(1 - alpha) + P_k(1)beta;
P_{k+1}(1) = P_k(0)alpha + P_k(1)(1 - beta).

Normalization condition:

P_k(0) + P_k(1) = 1.

Hence:

P_{k+1}(0) = P_k(0)(1 - alpha - beta) + beta.

By iteration, we obtain:

P_k(0) = beta/(alpha+beta) + (1 - alpha - beta)^k [P_0(0) - beta/(alpha+beta)],
P_k(1) = 1 - P_k(0).

Note: As k -> infinity (for 0 < alpha + beta < 2): P_k(0) -> beta/(alpha+beta), P_k(1) -> alpha/(alpha+beta).
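The two-state transient solution can be verified by running the one-step recursion and comparing it against the closed form; a sketch with assumed values alpha = 0.3, beta = 0.4, starting in state 0:

```python
# Iterate P_{k+1}(0) = P_k(0)(1 - alpha - beta) + beta and compare with the
# closed form P_k(0) = beta/(a+b) + (1-a-b)^k (P_0(0) - beta/(a+b)).

alpha, beta = 0.3, 0.4   # assumed intensities (any 0 < alpha + beta < 2 works)
p0 = 1.0                 # P_0(0): start in state 0
K = 20                   # number of transient steps

pk = p0
for _ in range(K):
    pk = pk * (1 - alpha - beta) + beta

limit = beta / (alpha + beta)
closed = limit + (1 - alpha - beta) ** K * (p0 - limit)
print(abs(pk - closed) < 1e-12, round(limit, 4))  # True 0.5714
```

After 20 steps the geometric term (1 - alpha - beta)^k is already negligible, so P_k(0) is essentially at its limit beta/(alpha+beta).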
-
Steady State Distribution

Under certain conditions, the DT MC will have the limiting (steady state) distribution:

lim_{k->infinity} P^k(i,j) = P(j), j in S,

for any i in S; such that sum_{j in S} P(j) = 1; P(j) >= 0.

We can write

P(j) = lim_{k} P_{k+1}(j) = lim_{k} sum_{i in S} P_k(i) P(i,j) = sum_{i in S} [lim_{k} P_k(i)] P(i,j),

leading to the following set of linear equations:

P(j) = sum_{i in S} P(i) P(i,j), j in S,   (1.1)
sum_{j in S} P(j) = 1.                     (1.2)

If the above set of linear equations has a unique solution {P(j), j in S}, it is said to be the stationary distribution of the Markov Chain.

If the above limits exist, yielding the chain's steady state distribution, the latter is equal to the stationary distribution {P(j), j in S}.
-
Example

Consider a DTMC X over the state space S = {0,1,2} with TPF:

    0.2 0.3 0.5
P = 0.4 0.2 0.4
    0.6 0.3 0.1

The stationary distribution {P(j), j in S} is obtained by solving

P(j) = sum_{i in S} P(i) P(i,j), j in S,   (1.1)
sum_{j in S} P(j) = 1,                     (1.2)

also written in matrix form as

P = P P_T, ||P|| = 1,   (2)

where P = {P(i), i in S} is a row vector and ||P|| = sum_{j in S} P(j).
-
Example (Cont.) For this example we write:
P(0)=0.2P(0)+0.4P(1)+0.6P(2) (1)
P(1)=0.3P(0)+0.2P(1)+0.3P(2) (2)
P(2)=0.5P(0)+0.4P(1)+0.1P(2) (3)
1=P(0)+P(1)+P(2) (4)
One of Eqs. (1) - (3) is redundant (these equations are linearly
dependent) and is not used.
We obtain the solution:
P(0)=30/77; P(1)=3/11; P(2)=26/77.
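The solution can be checked with exact rational arithmetic: take the balance equations for states 0 and 1 plus the normalization condition, and solve the 3x3 system. Gauss-Jordan elimination is a generic method chosen here for illustration, not one prescribed by the slides:

```python
from fractions import Fraction as F

# TPF of the example, with exact rational entries.
P = [[F(2, 10), F(3, 10), F(5, 10)],
     [F(4, 10), F(2, 10), F(4, 10)],
     [F(6, 10), F(3, 10), F(1, 10)]]

# Rows: state-0 balance, state-1 balance (from (1.1)), normalization (1.2).
A = [[P[0][0] - 1, P[1][0], P[2][0]],
     [P[0][1], P[1][1] - 1, P[2][1]],
     [F(1), F(1), F(1)]]
b = [F(0), F(0), F(1)]

n = 3
for col in range(n):                 # exact Gauss-Jordan elimination
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            b[r] = b[r] - f * b[col]
pi = [b[r] / A[r][r] for r in range(n)]
print(pi)  # [Fraction(30, 77), Fraction(3, 11), Fraction(26, 77)]
```

Note 3/11 = 21/77, so the three probabilities indeed sum to 77/77 = 1.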
-
Example: Discrete-Time Birth & Death Markov Chain

A discrete-time Markov chain X = {X_k, k >= 0} over the state space S = {0,1,2,...} is said to be a Discrete-Time Birth-and-Death (DTBD) process if its TPF is given by

P(i,j) = lambda_i,               for j = i+1, i >= 0,
         mu_i,                   for j = i-1, i >= 1,
         1 - lambda_i - mu_i,    for j = i, i >= 0,
         0,                      otherwise,

where lambda_i >= 0; mu_i >= 0; mu_0 = 0; and lambda_i + mu_i <= 1 for i >= 0.

lambda_i = (admitted) arrival intensity at state i;
mu_i = departure intensity from state i.

[Figure: a sample path of X_k moving between neighboring states i and i+1 over time slots k.]
-
DTBD: Stationary Distribution

The set of equations for the stationary distribution becomes

P(0) = P(0)(1 - lambda_0) + P(1)mu_1;
P(j) = P(j-1)lambda_{j-1} + P(j)(1 - lambda_j - mu_j) + P(j+1)mu_{j+1}, j >= 1.

Rearranging, we obtain the balance equations

P(1)mu_1 - P(0)lambda_0 = 0;
P(j+1)mu_{j+1} + P(j-1)lambda_{j-1} - P(j)(lambda_j + mu_j) = 0, j >= 1.

Hence,

P(j+1)mu_{j+1} - P(j)lambda_j = 0, j >= 0.
-
DT Birth & Death MC: Stationary Distribution (Cont.)

Define: a_0 = 1; a_j = (lambda_0 lambda_1 ... lambda_{j-1}) / (mu_1 mu_2 ... mu_j), j >= 1.

We conclude: P(j) = P(0) a_j, j >= 0. To compute P(0), we use the normalization condition:

1 = sum_{j} P(j) = P(0) sum_{j} a_j.

If sum_{j} a_j < infinity, we can compute P(0), so that the process is ergodic (positive recurrent), and a unique stationary distribution P = {P(j), j in S} exists; it is given by:

P(j) = a_j / sum_{i} a_i, j >= 0.
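For constant intensities lambda_i = lambda and mu_i = mu with lambda < mu (assumed values below), the product formula gives a_j = (lambda/mu)^j, so sum_j a_j = 1/(1 - rho) with rho = lambda/mu and P(j) = (1 - rho) rho^j. A quick numerical check:

```python
# Constant-intensity DTBD chain: a_j = rho^j with rho = lam/mu < 1,
# hence P(j) = (1 - rho) * rho**j (a geometric stationary distribution).
lam, mu = 0.2, 0.3       # assumed values with lam + mu <= 1 and lam < mu
rho = lam / mu

def P(j):
    return (1 - rho) * rho ** j

# Detailed balance mu P(j+1) = lam P(j) holds for every j:
balance_ok = all(abs(mu * P(j + 1) - lam * P(j)) < 1e-12 for j in range(50))
total = sum(P(j) for j in range(200))   # truncated normalization sum
print(balance_ok, round(total, 6))      # True 1.0
```

The truncated sum converges to 1 geometrically fast, consistent with sum_j a_j < infinity and hence ergodicity.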
-
Limiting Probabilities

In turn, if sum_{j} a_j = infinity, no stationary distribution exists; the process is non-ergodic.

We conclude then that when the limit exists and the process is non-ergodic, we have:

lim_{k->infinity} P(X_k = j) = 0, j >= 0.

For a DTBD process, when lambda_i + mu_i < 1 for some state i, we observe the process to be aperiodic. Then, if the process is also ergodic and thus has a stationary distribution, given above, it also has a steady state distribution, so that

lim_{k->infinity} P(X_k = j) = P(j), j >= 0.
-
Finite State DT Birth and Death Markov Chain

A discrete-time Markov chain X = {X_k, k >= 0} over the state space S = {0,1,2,...,N} is said to be a finite state Discrete-Time Birth-and-Death (DTBD) process if its TPF is given by

P(i,j) = lambda_i,               for j = i+1, 0 <= i < N,
         mu_i,                   for j = i-1, 1 <= i <= N,
         1 - lambda_i - mu_i,    for j = i, 0 <= i <= N,
         0,                      otherwise,

where lambda_i >= 0; mu_i >= 0; mu_0 = 0; lambda_N = 0; lambda_i + mu_i <= 1.

The set of equations for the stationary distribution is written as done for the infinite state case, yielding the same recursive formula, yet limited to states in S:

P(j+1)mu_{j+1} - P(j)lambda_j = 0, 0 <= j <= N-1.

[Figure: a sample path of X_k over time slots k, with states bounded between 0 and N.]
-
Finite State DTBD: Stationary Distribution

Define: a_0 = 1; a_j = (lambda_0 lambda_1 ... lambda_{j-1}) / (mu_1 mu_2 ... mu_j), 0 < j <= N.

Since now we always have that sum_{j=0}^{N} a_j < infinity, the process is always ergodic (positive recurrent), and a unique stationary distribution P = {P(j), j in S} always exists; it is given by:

P(j) = a_j / sum_{i=0}^{N} a_i, 0 <= j <= N.

For a DTBD process, when lambda_i + mu_i < 1 for some state i, we observe the process to be aperiodic. Then, it also has a steady state distribution, so that

lim_{k->infinity} P(X_k = j) = P(j), 0 <= j <= N.
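The finite-state formula translates directly into a short routine: build the a_j products, normalize, and check detailed balance. A sketch with assumed intensities on {0,1,2,3}:

```python
def finite_dtbd_stationary(lam, mu, N):
    """Stationary distribution of a finite DTBD chain on {0,...,N}:
    P(j) = a_j / sum_i a_i, where a_j = prod_{i=1}^{j} lam[i-1]/mu[i]."""
    a = [1.0]                                # a_0 = 1
    for j in range(1, N + 1):
        a.append(a[-1] * lam[j - 1] / mu[j])
    total = sum(a)                           # always finite for finite N
    return [aj / total for aj in a]

# Illustrative chain; the intensities are assumed values
# (with mu[0] = 0 and lam[N] = 0, as the definition requires).
lam = [0.3, 0.3, 0.3, 0.0]
mu = [0.0, 0.2, 0.2, 0.2]
pi = finite_dtbd_stationary(lam, mu, 3)
print(round(sum(pi), 10))  # 1.0
```

Because lam[j] > mu[j+1] here, the mass concentrates toward the upper states; swapping the roles of lam and mu would tilt it toward state 0.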