Hidden Markov Model
Hidden Markov Model

Observations in time: $O_1, O_2, \ldots, O_t$
States in time: $q_1, q_2, \ldots, q_t$
All states: $s_1, s_2, \ldots, s_N$

[Figure: two states $S_i$ and $S_j$ connected by transition probabilities $a_{ij}$ and $a_{ji}$]
Hidden Markov Model (Cont’d)

Discrete Markov Model:
$P(q_t = s_j \mid q_{t-1} = s_i, q_{t-2} = s_k, \ldots) = P(q_t = s_j \mid q_{t-1} = s_i)$

First-order (degree 1) Markov model.
Hidden Markov Model (Cont’d)

$a_{ij} = P(q_t = s_j \mid q_{t-1} = s_i)$

$a_{ij}$: transition probability from $S_i$ to $S_j$, $1 \le i, j \le N$
Discrete Markov Model Example

S1: the weather is rainy
S2: the weather is cloudy
S3: the weather is sunny

$A = \{a_{ij}\} = \begin{pmatrix} 0.4 & 0.3 & 0.3 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.1 & 0.8 \end{pmatrix}$

(rows and columns ordered rainy, cloudy, sunny)
Hidden Markov Model Example (Cont’d)

Question 1: what is the probability of the sequence Sunny-Sunny-Sunny-Rainy-Rainy-Sunny-Cloudy-Cloudy?

State sequence: $q_1 q_2 \ldots q_8 = s_3 s_3 s_3 s_1 s_1 s_3 s_2 s_2$

$P = \pi_3 \, a_{33} a_{33} a_{31} a_{11} a_{13} a_{32} a_{22} = 1 \cdot 0.8 \cdot 0.8 \cdot 0.1 \cdot 0.4 \cdot 0.3 \cdot 0.1 \cdot 0.6 \approx 4.608 \times 10^{-4}$
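The product above can be checked numerically. A minimal sketch, using the transition matrix from the weather example and taking the initial sunny state as given ($\pi_3 = 1$):

```python
import numpy as np

# Transition matrix of the weather example (rows/columns: rainy, cloudy, sunny)
A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

# Sunny-Sunny-Sunny-Rainy-Rainy-Sunny-Cloudy-Cloudy, as 0-based state indices
seq = [2, 2, 2, 0, 0, 2, 1, 1]

# P = pi_{q1} * prod_t a_{q_{t-1} q_t}, with pi_sunny = 1 (start state given)
p = 1.0
for prev, cur in zip(seq, seq[1:]):
    p *= A[prev, cur]

print(p)  # 0.8 * 0.8 * 0.1 * 0.4 * 0.3 * 0.1 * 0.6 = 4.608e-4
```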
Hidden Markov Model Example (Cont’d)

Question 2: what is the probability of staying in state $s_i$ for exactly $d$ days, given that we are in state $s_i$?

$\pi_i = P(q_1 = s_i), \; 1 \le i \le N$: the probability of being in state $i$ at time $t = 1$

$P(q_1 = s_i, \ldots, q_d = s_i, q_{d+1} = s_j \ne s_i) = (a_{ii})^{d-1}(1 - a_{ii}) = P_i(d)$
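A quick check of the duration formula above (the helper name is ours), using $a_{ii} = 0.8$ for sunny from the weather example; summed over all $d$ the distribution should total 1:

```python
# Probability of staying exactly d days in state i under a first-order
# Markov chain: geometric distribution P_i(d) = a_ii^(d-1) * (1 - a_ii).
def stay_probability(a_ii: float, d: int) -> float:
    return a_ii ** (d - 1) * (1.0 - a_ii)

# e.g. sunny (a_33 = 0.8): probability of exactly 3 consecutive sunny days
p3 = stay_probability(0.8, 3)
print(p3)  # 0.8 * 0.8 * 0.2 = 0.128
```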
Discrete Density HMM Components

N: number of states
M: number of outputs (observation symbols)
A (N×N): state transition probability matrix
B (N×M): output occurrence probability in each state
π (1×N): initial state probability

$\lambda = (A, B, \pi)$: set of HMM parameters
Three Basic HMM Problems

Recognition problem: given an HMM $\lambda$ and a sequence of observations O, what is the probability $P(O \mid \lambda)$?

State decoding problem: given a model $\lambda$ and a sequence of observations O, what is the most likely state sequence in the model that produced the observations?

Training problem: given a model $\lambda$ and a sequence of observations O, how should we adjust the model parameters in order to maximize $P(O \mid \lambda)$?
First Problem Solution

We know that:
$P(x, y) = P(x \mid y) P(y)$ and $P(x, y \mid z) = P(x \mid y, z) P(y \mid z)$

For a fixed state sequence $q = q_1 q_2 \ldots q_T$:
$P(O \mid q, \lambda) = \prod_{t=1}^{T} P(O_t \mid q_t, \lambda) = \prod_{t=1}^{T} b_{q_t}(O_t)$
$P(q \mid \lambda) = \pi_{q_1} a_{q_1 q_2} a_{q_2 q_3} \cdots a_{q_{T-1} q_T}$
First Problem Solution (Cont’d)

$P(O, q \mid \lambda) = P(O \mid q, \lambda) P(q \mid \lambda)$
$P(O, q \mid \lambda) = \pi_{q_1} b_{q_1}(O_1) \, a_{q_1 q_2} b_{q_2}(O_2) \cdots a_{q_{T-1} q_T} b_{q_T}(O_T)$
$P(O \mid \lambda) = \sum_{q} P(O, q \mid \lambda) = \sum_{q_1, \ldots, q_T} \pi_{q_1} b_{q_1}(O_1) \, a_{q_1 q_2} b_{q_2}(O_2) \cdots a_{q_{T-1} q_T} b_{q_T}(O_T)$

Computation order: $O(2T \cdot N^T)$
Forward Backward Approach

Computing $\alpha_t(i)$:
$\alpha_t(i) = P(O_1, O_2, \ldots, O_t, q_t = i \mid \lambda)$

1) Initialization:
$\alpha_1(i) = \pi_i b_i(O_1), \quad 1 \le i \le N$
Forward Backward Approach (Cont’d)

2) Induction:
$\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) a_{ij} \right] b_j(O_{t+1}), \quad 1 \le t \le T-1, \; 1 \le j \le N$

3) Termination:
$P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$

Computation order: $O(N^2 T)$
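The three steps above can be sketched in a few lines of NumPy (the toy model parameters below are made up for illustration):

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: returns alpha (T x N) and P(O | lambda).

    A: (N, N) transitions, B: (N, M) emissions,
    pi: (N,) initial distribution, obs: list of symbol indices.
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                  # 1) initialization
    for t in range(1, T):                         # 2) induction
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha, alpha[-1].sum()                 # 3) termination

# Tiny hypothetical 2-state, 2-symbol model:
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
alpha, likelihood = forward(A, B, pi, [0, 1, 0])
print(likelihood)
```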
Backward Variable

$\beta_t(i) = P(O_{t+1}, O_{t+2}, \ldots, O_T \mid q_t = i, \lambda)$

1) Initialization:
$\beta_T(i) = 1, \quad 1 \le i \le N$

2) Induction:
$\beta_t(i) = \sum_{j=1}^{N} a_{ij} b_j(O_{t+1}) \beta_{t+1}(j), \quad t = T-1, T-2, \ldots, 1, \; 1 \le i \le N$
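A matching sketch of the backward recursion; as a sanity check, $\sum_i \pi_i b_i(O_1) \beta_1(i)$ should equal $P(O \mid \lambda)$ (toy parameters are again made up):

```python
import numpy as np

def backward(A, B, obs):
    """Backward algorithm: beta_t(i) = P(O_{t+1}..O_T | q_t = i, lambda)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))                        # 1) initialization: beta_T = 1
    for t in range(T - 2, -1, -1):                # 2) induction, t = T-1 .. 1
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Same toy model: recover P(O|lambda) from the backward pass
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 0]
beta = backward(A, B, obs)
print((pi * B[:, obs[0]] * beta[0]).sum())
```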
Second Problem Solution

Finding the most likely state sequence:
$\gamma_t(i) = P(q_t = i \mid O, \lambda) = \frac{P(O, q_t = i \mid \lambda)}{P(O \mid \lambda)} = \frac{\alpha_t(i) \beta_t(i)}{\sum_{i=1}^{N} \alpha_t(i) \beta_t(i)}$

Individually most likely state:
$q_t^* = \arg\max_{1 \le i \le N} [\gamma_t(i)], \quad 1 \le t \le T$
Viterbi Algorithm

Define:
$\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P(q_1, q_2, \ldots, q_{t-1}, q_t = i, O_1, O_2, \ldots, O_t \mid \lambda), \quad 1 \le i \le N$

$\delta_t(i)$ is the probability of the most likely state sequence that ends in state $i$ at time $t$ and accounts for the first $t$ observations.
Viterbi Algorithm (Cont’d)

1) Initialization:
$\delta_1(i) = \pi_i b_i(O_1), \quad \psi_1(i) = 0, \quad 1 \le i \le N$

$\psi_t(j)$ records the most likely state at time $t-1$ on the best path ending in state $j$ at time $t$.
Viterbi Algorithm (Cont’d)

2) Recursion:
$\delta_t(j) = \max_{1 \le i \le N} [\delta_{t-1}(i) a_{ij}] \, b_j(O_t), \quad 2 \le t \le T, \; 1 \le j \le N$
$\psi_t(j) = \arg\max_{1 \le i \le N} [\delta_{t-1}(i) a_{ij}]$
Viterbi Algorithm (Cont’d)

3) Termination:
$P^* = \max_{1 \le i \le N} [\delta_T(i)]$
$q_T^* = \arg\max_{1 \le i \le N} [\delta_T(i)]$

4) Backtracking:
$q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1$
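Steps 1 through 4 can be sketched as follows (toy parameters are made up for illustration):

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Viterbi algorithm: most likely state path and its probability P*."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]                      # 1) initialization
    for t in range(1, T):                             # 2) recursion
        scores = delta[t - 1][:, None] * A            # delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]                  # 3) termination
    for t in range(T - 1, 0, -1):                     # 4) backtracking
        path.append(int(psi[t][path[-1]]))
    return path[::-1], delta[-1].max()

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
path, p_star = viterbi(A, B, pi, [0, 1, 0])
print(path, p_star)
```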
Third Problem Solution

Parameter estimation using the Baum-Welch or Expectation-Maximization (EM) approach.

Define:
$\xi_t(i, j) = P(q_t = i, q_{t+1} = j \mid O, \lambda) = \frac{P(q_t = i, q_{t+1} = j, O \mid \lambda)}{P(O \mid \lambda)} = \frac{\alpha_t(i) a_{ij} b_j(O_{t+1}) \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i) a_{ij} b_j(O_{t+1}) \beta_{t+1}(j)}$
Third Problem Solution (Cont’d)

$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i, j)$

$\sum_{t=1}^{T-1} \gamma_t(i)$: expected number of jumps out of state $i$

$\sum_{t=1}^{T-1} \xi_t(i, j)$: expected number of jumps from state $i$ to state $j$
Third Problem Solution (Cont’d)

Reestimation formulas:
$\bar{\pi}_i = \gamma_1(i)$
$\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$
$\bar{b}_j(k) = \frac{\sum_{t=1,\, O_t = V_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$
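One full reestimation step, combining the forward/backward passes with the formulas above. This is a sketch for a single observation sequence; the function and variable names are ours, and the toy parameters are made up:

```python
import numpy as np

def baum_welch_step(A, B, pi, obs):
    """One Baum-Welch (EM) reestimation step for a discrete-output HMM."""
    T, N = len(obs), len(pi)
    # Forward pass: alpha_t(i)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    # Backward pass: beta_t(i)
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    pO = alpha[-1].sum()                              # P(O | lambda)
    gamma = alpha * beta / pO                         # gamma_t(i)
    # xi_t(i,j) = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / P(O|lambda)
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / pO
    # Reestimation formulas
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        mask = (np.asarray(obs) == k)                 # times with O_t = V_k
        new_B[:, k] = gamma[mask].sum(axis=0) / gamma.sum(axis=0)
    return new_A, new_B, new_pi

A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.8, 0.2], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 0]
new_A, new_B, new_pi = baum_welch_step(A, B, pi, obs)
print(new_A)
```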
Baum Auxiliary Function

$Q(\lambda', \lambda) = \sum_{q} P(O, q \mid \lambda') \log P(O, q \mid \lambda)$

If $Q(\lambda', \lambda) \ge Q(\lambda', \lambda')$ then $P(O \mid \lambda) \ge P(O \mid \lambda')$.

By this approach we will reach a local optimum.
Restrictions of Reestimation Formulas

$\sum_{i=1}^{N} \bar{\pi}_i = 1$
$\sum_{j=1}^{N} \bar{a}_{ij} = 1, \quad 1 \le i \le N$
$\sum_{k=1}^{M} \bar{b}_j(k) = 1, \quad 1 \le j \le N$
Continuous Observation Density

Instead of discrete output probabilities $b_j(k) = P(O_t = V_k \mid q_t = j)$, we have values of a PDF:

$b_j(O_t) = \sum_{k=1}^{M} C_{jk} \, \mathcal{N}(O_t, \mu_{jk}, \Sigma_{jk}), \quad \int b_j(O_t) \, dO_t = 1$

$C_{jk}$: mixture coefficients; $\mu_{jk}$: means; $\Sigma_{jk}$: (co)variances
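A one-dimensional sketch of the mixture density $b_j(O_t)$ above (the component parameters are made up; in the multivariate case the variances become covariance matrices):

```python
import numpy as np

def gmm_density(o, C_j, mu_j, var_j):
    """Mixture emission b_j(o) = sum_k C_jk * N(o; mu_jk, var_jk), 1-D case."""
    g = np.exp(-(o - mu_j) ** 2 / (2 * var_j)) / np.sqrt(2 * np.pi * var_j)
    return float(np.dot(C_j, g))

# Hypothetical state j with M = 2 mixture components:
C_j = np.array([0.4, 0.6])    # mixture coefficients, sum to 1
mu_j = np.array([0.0, 3.0])   # component means
var_j = np.array([1.0, 2.0])  # component variances
print(gmm_density(0.0, C_j, mu_j, var_j))
```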
Continuous Observation Density (Cont’d)

Mixtures in HMM. Dominant mixture approximation:

$b_j(O_t) \approx \max_{k} C_{jk} \, \mathcal{N}(O_t, \mu_{jk}, \Sigma_{jk})$

[Figure: states S1, S2, S3, each with four mixture components M1, M2, M3, M4]
Continuous Observation Density (Cont’d)

Model parameters:
$\lambda = (A, C, \mu, \Sigma, \pi)$
with sizes N×N, N×M, N×M×K, N×M×K×K, and 1×N respectively.

N: number of states
M: number of mixtures in each state
K: dimension of the observation vector
Continuous Observation Density (Cont’d)

$\bar{C}_{jk} = \frac{\sum_{t=1}^{T} \gamma_t(j, k)}{\sum_{t=1}^{T} \sum_{k=1}^{M} \gamma_t(j, k)}$

$\bar{\mu}_{jk} = \frac{\sum_{t=1}^{T} \gamma_t(j, k) \, O_t}{\sum_{t=1}^{T} \gamma_t(j, k)}$
Continuous Observation Density (Cont’d)

$\bar{\Sigma}_{jk} = \frac{\sum_{t=1}^{T} \gamma_t(j, k) (O_t - \mu_{jk})(O_t - \mu_{jk})'}{\sum_{t=1}^{T} \gamma_t(j, k)}$

$\gamma_t(j, k)$: probability of being in the $j$'th state with the $k$'th mixture component at time $t$
State Duration Modeling

Probability of staying $d$ times in state $i$:
$P_i(d) = (a_{ii})^{d-1}(1 - a_{ii})$

[Figure: states $S_i$ and $S_j$ with self-loops $a_{ii}$, $a_{jj}$ and transitions $a_{ij}$, $a_{ji}$]
State Duration Modeling (Cont’d)

HMM with explicit duration:

[Figure: states $S_i$ and $S_j$ without self-loops; each state carries its own duration distribution $P_i(d)$, $P_j(d)$, with transitions $a_{ij}$, $a_{ji}$]
State Duration Modeling (Cont’d)

Generating a sequence from an HMM with state durations:
– Select the initial state $q_1 = i$ using the initial probabilities $\pi_i$
– Select the duration $d_1$ using $P_{q_1}(d)$
– Select the observation sequence $O_1, O_2, \ldots, O_{d_1}$ using $b_{q_1}(O_1, O_2, \ldots, O_{d_1})$; in practice we assume the following independence: $b_{q_1}(O_1, O_2, \ldots, O_{d_1}) = \prod_{t=1}^{d_1} b_{q_1}(O_t)$
– Select the next state $q_2$ using the transition probabilities $a_{q_1 q_2}$. We also have an additional constraint: $a_{q_1 q_1} = 0$
Training In HMM
Maximum Likelihood (ML)
Maximum Mutual Information (MMI)
Minimum Discrimination Information (MDI)
Training In HMM (Cont’d)

Maximum Likelihood (ML): compute $P(O \mid \lambda_1), P(O \mid \lambda_2), \ldots, P(O \mid \lambda_V)$ for each candidate model on the observation sequence, and choose the model with the maximum likelihood:

$P^* = \max_{1 \le v \le V} [P(O \mid \lambda_v)]$
Training In HMM (Cont’d)

Maximum Mutual Information (MMI):

Mutual information:
$I(O, \nu) = \log \frac{P(O, \lambda_\nu)}{P(O) \, P(\lambda_\nu)}$
$I(O, \nu) = \log P(O \mid \lambda_\nu) - \log \sum_{w=1}^{V} P(O \mid \lambda_w) P(w)$

where $\lambda = \{\lambda_v\}$.
Training In HMM (Cont’d)

Minimum Discrimination Information (MDI):

$I(q : P) = \int q(o) \log \frac{q(o)}{P(o)} \, do$

Observation: $O = (O_1, O_2, \ldots, O_T)$
Autocorrelation: $R = (R_1, R_2, \ldots, R_t)$

$\nu(R, P_\lambda) = \inf_{q \in Q(R)} I(q : P_\lambda)$