L23: hidden Markov models
• Discrete Markov processes
• Hidden Markov models
• Forward and Backward procedures
• The Viterbi algorithm
This lecture is based on [Rabiner and Juang, 1993]
Introduction
• The next two lectures in the course deal with the recognition of temporal or sequential patterns
  – Sequential pattern recognition is a relevant problem in several disciplines
    • Human-computer interaction: speech recognition
    • Bioengineering: ECG and EEG analysis
    • Robotics: mobile robot navigation
    • Bioinformatics: DNA base sequence alignment
• A number of approaches can be used to perform time-series analysis
  – Tap delay lines can be used to form a feature vector that captures the behavior of the signal during a fixed time window
    • This represents a form of "short-term" memory
    • This simple approach is, however, limited by the finite length of the delay line
  – Feedback connections can be used to produce recurrent MLP models
    • Global feedback allows the model to have "long-term" memory capabilities
    • Training and using recurrent networks is, however, rather involved and outside the scope of this class (refer to [Principe et al., 2000; Haykin, 1999])
  – Instead, we will focus on hidden Markov models, a statistical approach that has become the "gold standard" for time-series analysis
Discrete Markov Processes
• Consider a system described by the following process
  – At any given time, the system can be in one of $N$ possible states $S = \{S_1, S_2, \dots, S_N\}$
  – At regular times, the system undergoes a transition to a new state
  – Transitions between states can be described probabilistically
• Markov property
  – In general, the probability that the system is in state $q_t = S_j$ is a function of the complete history of the system
  – To simplify the analysis, however, we will assume that the state of the system depends only on its immediate past
    $$P(q_t = S_j \mid q_{t-1} = S_i, q_{t-2} = S_k, \dots) = P(q_t = S_j \mid q_{t-1} = S_i)$$
  – This is known as a first-order Markov process
  – We will also assume that the transition probability between any two states is independent of time
    $$a_{ij} = P(q_t = S_j \mid q_{t-1} = S_i) \quad \text{s.t.} \quad a_{ij} \ge 0, \quad \sum_{j=1}^{N} a_{ij} = 1$$
• Example
  – Consider a simple three-state Markov model of the weather
  – On any given day, the weather can be described as being in one of three states
    • State 1: precipitation (rain or snow)
    • State 2: cloudy
    • State 3: sunny
  – Transitions between states are described by the transition matrix
    $$A = \{a_{ij}\} = \begin{bmatrix} 0.4 & 0.3 & 0.3 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.1 & 0.8 \end{bmatrix}$$
[Figure: state-transition diagram of the three-state weather model; the arcs are labeled with the entries of A, including the self-transitions 0.4 (S1), 0.6 (S2), and 0.8 (S3)]
  – Question
    • Given that the weather on day $t = 1$ is sunny, what is the probability that the weather for the next 7 days will be "sun, sun, rain, rain, sun, clouds, sun"?
    • Answer:
      $$\begin{aligned} P(S_3, S_3, S_3, S_1, S_1, S_3, S_2, S_3 \mid \text{model}) &= P(S_3)\, P(S_3 \mid S_3)\, P(S_3 \mid S_3)\, P(S_1 \mid S_3)\, P(S_1 \mid S_1)\, P(S_3 \mid S_1)\, P(S_2 \mid S_3)\, P(S_3 \mid S_2) \\ &= \pi_3\, a_{33}\, a_{33}\, a_{31}\, a_{11}\, a_{13}\, a_{32}\, a_{23} \\ &= 1 \times 0.8 \times 0.8 \times 0.1 \times 0.4 \times 0.3 \times 0.1 \times 0.2 \approx 1.536 \times 10^{-4} \end{aligned}$$
  – Question
    • What is the probability that the weather stays in the same known state $S_i$ for exactly $T$ consecutive days?
    • Answer:
      $$P(q_t = S_i, q_{t+1} = S_i, \dots, q_{t+T-1} = S_i, q_{t+T} \ne S_i) = (a_{ii})^{T-1}(1 - a_{ii})$$
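Both answers follow directly from the transition matrix. The snippet below is a minimal sketch in Python/NumPy that reproduces the two calculations; the function names and the 0-based state indexing are choices made for this example, not notation from the lecture.

```python
import numpy as np

# Transition matrix of the three-state weather model (row = from-state, column = to-state)
A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

def sequence_probability(states, A, first_state_prob=1.0):
    """Probability of a given state sequence under a first-order Markov chain.
    `states` holds 0-based state indices; the first state is assumed known,
    so its probability defaults to 1 (as in the example above)."""
    p = first_state_prob
    for prev, nxt in zip(states[:-1], states[1:]):
        p *= A[prev, nxt]
    return p

def duration_probability(i, T, A):
    """Probability of staying in state i for exactly T consecutive days."""
    return A[i, i] ** (T - 1) * (1.0 - A[i, i])

# Day 1 is sunny (S3), followed by sun, sun, rain, rain, sun, clouds, sun
seq = [2, 2, 2, 0, 0, 2, 1, 2]          # 0-based: S1 -> 0, S2 -> 1, S3 -> 2
print(sequence_probability(seq, A))      # ~1.536e-04
print(duration_probability(2, 3, A))     # P(exactly 3 consecutive sunny days)
```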
Hidden Markov models
• Introduction
  – The previous model assumes that each state can be uniquely associated with an observable event
    • Once an observation is made, the state of the system is then trivially retrieved
    • This model, however, is too restrictive to be of practical use for most realistic problems
  – To make the model more flexible, we will assume that the outcomes or observations of the model are a probabilistic function of each state
    • Each state can produce a number of outputs according to a unique probability distribution, and each distinct output can potentially be generated at any state
    • These are known as Hidden Markov Models (HMMs) because the state sequence is not directly observable; it can only be approximated from the sequence of observations produced by the system
• The coin-toss problem
  – To illustrate the concept of an HMM, consider the following scenario
    • You are placed in a room with a curtain
    • Behind the curtain there is a person performing a coin-toss experiment
    • This person selects one of several coins and tosses it: heads (H) or tails (T)
    • She tells you the outcome (H, T), but not which coin was used each time
  – Your goal is to build a probabilistic model that best explains a sequence of observations $O = \{o_1, o_2, o_3, \dots\} = \{H, T, T, H, \dots\}$
    • The coins represent the states; these are hidden because you do not know which coin was tossed each time
    • The outcome of each toss represents an observation
    • A "likely" sequence of coins may be inferred from the observations, but this state sequence will not be unique
  – If the coins are hidden, how many states should the HMM have?
  – One-coin model
    • In this case, we assume that the person behind the curtain only has one coin
    • As a result, the Markov model is observable since there is only one state
    • In fact, we may describe the system with a deterministic model where the states are the actual observations (see figure)
    • In either case, the model parameter P(H) may be found from the ratio of heads and tails
  – Two-coin model
    • A more sophisticated model assumes that there are two coins
      – Each coin (state) has its own distribution of heads and tails, which models the fact that the coins may be biased
      – Transitions between the two states model the random process used by the person behind the curtain to select one of the coins
    • The model has 4 free parameters
[Rabiner, 1989]
  – Three-coin model
    • In this case, the model would have three separate states
      – This HMM can be interpreted in a similar fashion to the two-coin model
    • The model has 9 free parameters
  – Which of these models is best?
    • Since the states are not observable, the best we can do is select the model that best explains the data (e.g., using a Maximum Likelihood criterion)
    • Whether the observation sequence is long and rich enough to warrant a more complex model is a different story, though
[Rabiner, 1989]
• The urn-ball problem
  – To further illustrate the concept of an HMM, consider this scenario
    • You are placed in the same room with a curtain
    • Behind the curtain there are N urns, each containing a large number of balls of M different colors
    • The person behind the curtain selects an urn according to an internal random process, then randomly grabs a ball from the selected urn
    • He shows you the ball, and places it back in the urn
    • This process is repeated over and over
  – Questions
    • How would you represent this experiment with an HMM? What are the states? Why are the states hidden? What are the observations?
[Figure: N urns behind the curtain, labeled Urn 1, Urn 2, …, Urn N, each containing colored balls]
• Elements of an HMM
  – An HMM is characterized by the following set of parameters
    • $N$, the number of states in the model, $S = \{S_1, S_2, \dots, S_N\}$
    • $M$, the number of discrete observation symbols, $V = \{v_1, v_2, \dots, v_M\}$
    • $A = \{a_{ij}\}$, the state-transition probability distribution
      $$a_{ij} = P(q_{t+1} = S_j \mid q_t = S_i)$$
    • $B = \{b_j(k)\}$, the observation or emission probability distribution
      $$b_j(k) = P(o_t = v_k \mid q_t = S_j)$$
    • $\pi$, the initial state distribution
      $$\pi_i = P(q_1 = S_i)$$
  – Therefore, an HMM is specified by two scalars ($N$ and $M$) and three probability distributions ($A$, $B$, and $\pi$)
    • In what follows, we will represent an HMM by the compact notation $\lambda = (A, B, \pi)$
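As a concrete counterpart to this definition, the sketch below bundles the three distributions into a single Python object that the later code examples can reuse; the class name HMM, the array-shape conventions, and the use of NumPy are assumptions made here, not part of the lecture.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HMM:
    """Compact representation lambda = (A, B, pi) of a discrete HMM.
    N and M are implied by the array shapes."""
    A: np.ndarray    # (N, N) state-transition probabilities a_ij, rows sum to 1
    B: np.ndarray    # (N, M) emission probabilities b_j(k), rows sum to 1
    pi: np.ndarray   # (N,)   initial state distribution

    def __post_init__(self):
        # Basic sanity checks on the stochastic constraints
        assert np.allclose(self.A.sum(axis=1), 1.0)
        assert np.allclose(self.B.sum(axis=1), 1.0)
        assert np.isclose(self.pi.sum(), 1.0)
```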
• HMM generation of observation sequences
  – Given a completely specified HMM $\lambda = (A, B, \pi)$, how can an observation sequence $O = \{o_1, o_2, o_3, o_4, \dots\}$ be generated?
    1. Choose an initial state $q_1 = S_i$ according to the initial state distribution $\pi$
    2. Set $t = 1$
    3. Generate observation $o_t$ according to the emission probability $b_i(k)$ of the current state
    4. Move to a new state $q_{t+1} = S_j$ according to the state-transition probabilities $a_{ij}$ of the current state
    5. Set $t = t + 1$ and return to step 3 if $t \le T$; otherwise, terminate
  – Example
    • Generate an observation sequence with $T = 5$ for a coin-tossing experiment with three coins and the following probabilities (a sampling sketch follows below)

                 Coin 1   Coin 2   Coin 3
          P(H)    0.50     0.75     0.25
          P(T)    0.50     0.25     0.75

      $$A = \{a_{ij}\} = \tfrac{1}{3} \;\; \forall i, j \qquad \pi = \{\pi_i\} = \tfrac{1}{3} \;\; \forall i$$
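A direct transcription of steps 1–5 might look like the sketch below, reusing the HMM container defined earlier; the function name generate_sequence and the column ordering H = 0, T = 1 are assumptions of this example.

```python
import numpy as np

def generate_sequence(hmm, T, rng=None):
    """Sample a state path and an observation sequence of length T from a
    discrete HMM lambda = (A, B, pi), following steps 1-5 above."""
    if rng is None:
        rng = np.random.default_rng()
    N, M = hmm.B.shape
    states, obs = [], []
    q = rng.choice(N, p=hmm.pi)                  # step 1: initial state from pi
    for _ in range(T):                           # steps 2-5: loop over t = 1..T
        obs.append(rng.choice(M, p=hmm.B[q]))    # step 3: emit o_t from b_q(.)
        states.append(q)
        q = rng.choice(N, p=hmm.A[q])            # step 4: transition using a_qj
    return states, obs

# Three biased coins, uniform transitions and initial distribution (the example above)
coins = HMM(A=np.full((3, 3), 1 / 3),
            B=np.array([[0.50, 0.50], [0.75, 0.25], [0.25, 0.75]]),  # columns: H, T
            pi=np.full(3, 1 / 3))
states, obs = generate_sequence(coins, T=5)
print(states, ['H' if o == 0 else 'T' for o in obs])
```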
• The three basic HMM problems
  – Problem 1: Probability Evaluation
    • Given an observation sequence $O = \{o_1, o_2, o_3, \dots\}$ and a model $\lambda = (A, B, \pi)$, how do we efficiently compute $P(O \mid \lambda)$, the likelihood of the observation sequence given the model?
      – The solution is given by the Forward and Backward procedures
  – Problem 2: Optimal State Sequence
    • Given an observation sequence $O = \{o_1, o_2, o_3, \dots\}$ and a model $\lambda$, how do we choose a state sequence $Q = \{q_1, q_2, q_3, \dots\}$ that is optimal (i.e., best explains the data)?
      – The solution is provided by the Viterbi algorithm
  – Problem 3: Parameter Estimation
    • How do we adjust the parameters of the model $\lambda = (A, B, \pi)$ to maximize the likelihood $P(O \mid \lambda)$?
      – The solution is given by the Baum-Welch re-estimation procedure
Forward and Backward procedures
• Problem 1: Probability Evaluation
  – Our goal is to compute the likelihood of an observation sequence $O = \{o_1, o_2, o_3, \dots\}$ given a particular HMM model $\lambda = (A, B, \pi)$
  – Computation of this probability involves enumerating every possible state sequence and evaluating the corresponding probability
    $$P(O \mid \lambda) = \sum_{\text{all } Q} P(O \mid Q, \lambda)\, P(Q \mid \lambda)$$
  – For a particular state sequence $Q = \{q_1, q_2, \dots, q_T\}$, $P(O \mid Q, \lambda)$ is
    $$P(O \mid Q, \lambda) = \prod_{t=1}^{T} P(o_t \mid q_t, \lambda) = \prod_{t=1}^{T} b_{q_t}(o_t)$$
  – The probability of the state sequence $Q$ is
    $$P(Q \mid \lambda) = \pi_{q_1}\, a_{q_1 q_2}\, a_{q_2 q_3} \cdots a_{q_{T-1} q_T}$$
  – Merging these results, we obtain
    $$P(O \mid \lambda) = \sum_{q_1, q_2, \dots, q_T} \pi_{q_1} b_{q_1}(o_1)\, a_{q_1 q_2} b_{q_2}(o_2) \cdots a_{q_{T-1} q_T} b_{q_T}(o_T)$$
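The merged expression can be evaluated literally by summing over every state sequence, which is only feasible for toy problems; the sketch below (using the assumed HMM container and integer-coded observations from the earlier examples) is meant purely as a brute-force check against the Forward procedure introduced next.

```python
import itertools

def likelihood_brute_force(hmm, obs):
    """P(O | lambda) by explicit summation over all N**T state sequences.
    Exponential in T; only usable to sanity-check the efficient procedures."""
    T, N = len(obs), hmm.A.shape[0]
    total = 0.0
    for Q in itertools.product(range(N), repeat=T):
        p = hmm.pi[Q[0]] * hmm.B[Q[0], obs[0]]
        for t in range(1, T):
            p *= hmm.A[Q[t - 1], Q[t]] * hmm.B[Q[t], obs[t]]
        total += p
    return total
```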
  – Computational complexity
    • With $N^T$ possible state sequences, this approach becomes infeasible even for small problems… sound familiar?
      – For $N = 5$ and $T = 100$, the number of computations is on the order of $10^{72}$
    • Fortunately, the computation of $P(O \mid \lambda)$ has a lattice (or trellis) structure, which lends itself to a very efficient implementation known as the Forward procedure
[Rabiner, 1989]
• The Forward procedure
  – Consider the following variable $\alpha_t(i)$, defined as
    $$\alpha_t(i) = P(o_1, o_2, \dots, o_t, q_t = S_i \mid \lambda)$$
    • which represents the probability of the observation sequence up to time $t$ AND the state $S_i$ at time $t$, given the model $\lambda$
  – Computation of this variable can be performed efficiently by induction
    • Initialization:
      $$\alpha_1(i) = \pi_i\, b_i(o_1)$$
    • Induction:
      $$\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \right] b_j(o_{t+1}) \qquad 1 \le t \le T - 1,\; 1 \le j \le N$$
    • Termination:
      $$P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$$
    • As a result, the computation of $P(O \mid \lambda)$ is reduced from on the order of $2T \times N^T$ operations down to $N^2 \times T$ (from $10^{72}$ to about 3,000 for $N = 5$, $T = 100$)
[Rabiner, 1989]
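In code, the three steps map onto a $T \times N$ trellis of $\alpha$ values. The sketch below is one straightforward way to write it, again assuming the HMM container and integer-coded observations from the earlier examples; a practical implementation would add per-step scaling (or work in log space) to avoid numerical underflow.

```python
import numpy as np

def forward(hmm, obs):
    """Forward procedure: returns the alpha trellis (T x N) and P(O | lambda)."""
    T, N = len(obs), hmm.A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = hmm.pi * hmm.B[:, obs[0]]                  # initialization
    for t in range(1, T):                                 # induction over t
        alpha[t] = (alpha[t - 1] @ hmm.A) * hmm.B[:, obs[t]]
    return alpha, alpha[-1].sum()                         # termination
```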
• The Backward procedure
  – Analogously, consider the backward variable $\beta_t(i)$, defined as
    $$\beta_t(i) = P(o_{t+1}, o_{t+2}, \dots, o_T \mid q_t = S_i, \lambda)$$
  – $\beta_t(i)$ represents the probability of the partial observation sequence from $t + 1$ to the end, given state $S_i$ at time $t$ and model $\lambda$
    • As before, $\beta_t(i)$ can be computed through induction
    • Initialization:
      $$\beta_T(i) = 1 \quad \text{(arbitrarily)}$$
    • Induction:
      $$\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j) \qquad t = T - 1, T - 2, \dots, 1,\; 1 \le i \le N$$
  – Similarly, this computation can be performed efficiently on the order of $N^2 \times T$ operations
[Rabiner, 1989]
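The backward recursion mirrors the forward one, filling the trellis from the last time step toward the first; the sketch below follows the same assumptions as the forward example.

```python
import numpy as np

def backward(hmm, obs):
    """Backward procedure: returns the beta trellis (T x N)."""
    T, N = len(obs), hmm.A.shape[0]
    beta = np.zeros((T, N))
    beta[-1] = 1.0                                        # initialization (arbitrary)
    for t in range(T - 2, -1, -1):                        # induction, t = T-1, ..., 1
        beta[t] = hmm.A @ (hmm.B[:, obs[t + 1]] * beta[t + 1])
    return beta
```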
The Viterbi algorithm
• Problem 2: Optimal State Sequence
  – Finding the optimal state sequence is a more difficult problem than the estimation of $P(O \mid \lambda)$
  – Part of the issue has to do with defining an optimality measure, since several criteria are possible
    • Finding the states $q_t$ that are individually most likely at each time $t$
    • Finding the single best state sequence path (i.e., maximizing the posterior $P(Q \mid O, \lambda)$)
  – The second criterion is the most widely used, and leads to the well-known Viterbi algorithm
    • However, we first address the first criterion, as it allows us to define a variable that will be used later in the solution of Problem 3
  – As in the Forward-Backward procedures, we define a variable $\gamma_t(i)$
    $$\gamma_t(i) = P(q_t = S_i \mid O, \lambda)$$
    • which represents the probability of being in state $S_i$ at time $t$, given the observation sequence $O$ and the model $\lambda$
  – Using the definition of conditional probability, we can write
    $$\gamma_t(i) = P(q_t = S_i \mid O, \lambda) = \frac{P(O, q_t = S_i \mid \lambda)}{P(O \mid \lambda)} = \frac{P(O, q_t = S_i \mid \lambda)}{\sum_{j=1}^{N} P(O, q_t = S_j \mid \lambda)}$$
  – Now, the numerator of $\gamma_t(i)$ is equal to the product of $\alpha_t(i)$ and $\beta_t(i)$, so
    $$\gamma_t(i) = \frac{P(O, q_t = S_i \mid \lambda)}{\sum_{j=1}^{N} P(O, q_t = S_j \mid \lambda)} = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j)}$$
  – The individually most likely state $q_t^\ast$ at each time $t$ is then
    $$q_t^\ast = \arg\max_{1 \le i \le N} \gamma_t(i) \qquad \forall t = 1, \dots, T$$
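Given the forward and backward sketches above, the per-time-step posteriors and the individually most likely states follow in a few lines; gamma_posteriors is an assumed name, not lecture notation.

```python
def gamma_posteriors(hmm, obs):
    """State posteriors gamma_t(i) = P(q_t = S_i | O, lambda) as a (T x N) array,
    computed from the forward and backward trellises defined above."""
    alpha, _ = forward(hmm, obs)
    beta = backward(hmm, obs)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # normalize over states at each t
    return gamma

# Individually most likely state at each time step:
# best_states = gamma_posteriors(coins, obs).argmax(axis=1)
```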
  – The problem with choosing the individually most likely states is that the overall state sequence may not be valid
    • Consider a situation where the individually most likely states are $q_t = S_i$ and $q_{t+1} = S_j$, but the transition probability $a_{ij} = 0$
  – Instead, to avoid this problem, it is common to look for the single best state sequence, at the expense of having sub-optimal individual states
  – This is accomplished with the Viterbi algorithm
• The Viterbi algorithm
  – To find the single best state sequence we define yet another variable
    $$\delta_t(i) = \max_{q_1, q_2, \dots, q_{t-1}} P(q_1 q_2 \dots q_t = S_i,\; o_1 o_2 \dots o_t \mid \lambda)$$
    • which represents the highest probability along a single path that accounts for the first $t$ observations and ends at state $S_i$
  – By induction, $\delta_{t+1}(j)$ can be computed as
    $$\delta_{t+1}(j) = \left[ \max_i \delta_t(i)\, a_{ij} \right] b_j(o_{t+1})$$
  – To retrieve the state sequence, we also need to keep track of the state that maximizes $\delta_t(i)$ at each time $t$, which is done by constructing an array
    $$\psi_{t+1}(j) = \arg\max_{1 \le i \le N} \delta_t(i)\, a_{ij}$$
    • $\psi_{t+1}(j)$ is the state at time $t$ from which a transition to state $S_j$ maximizes the probability $\delta_{t+1}(j)$
[Figure: trellis of states S1–S4 over time steps 1–10 illustrating backtracking; the annotation Ψ_5(S4) = S2 marks the backpointer at t = 5 along the most likely state sequence]
  – The Viterbi algorithm for finding the optimal state sequence becomes
    • Initialization:
      $$\delta_1(i) = \pi_i\, b_i(o_1) \qquad 1 \le i \le N$$
      $$\psi_1(i) = 0 \quad \text{(no previous states)}$$
    • Recursion:
      $$\delta_t(j) = \max_{1 \le i \le N} \left[ \delta_{t-1}(i)\, a_{ij} \right] b_j(o_t)$$
      $$\psi_t(j) = \arg\max_{1 \le i \le N} \delta_{t-1}(i)\, a_{ij} \qquad 2 \le t \le T,\; 1 \le j \le N$$
    • Termination:
      $$P^\ast = \max_{1 \le i \le N} \delta_T(i)$$
      $$q_T^\ast = \arg\max_{1 \le i \le N} \delta_T(i)$$
  – And the optimal state sequence can be retrieved by backtracking
    $$q_t^\ast = \psi_{t+1}(q_{t+1}^\ast) \qquad t = T - 1, T - 2, \dots, 1$$
  – Notice that the Viterbi algorithm is similar to the Forward procedure, except that it uses a maximization over previous states instead of a summation
[Rabiner, 1989]
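A compact array-based sketch of these four steps, under the same assumptions as the earlier examples (the HMM container and integer-coded observations); a production implementation would normally work in log-probabilities to avoid underflow. On the coin example above, calling viterbi(coins, obs) would return the most likely coin sequence for the sampled observations.

```python
import numpy as np

def viterbi(hmm, obs):
    """Viterbi algorithm: returns the single best state sequence and its probability."""
    T, N = len(obs), hmm.A.shape[0]
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = hmm.pi * hmm.B[:, obs[0]]             # initialization
    for t in range(1, T):                            # recursion
        scores = delta[t - 1][:, None] * hmm.A       # scores[i, j] = delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * hmm.B[:, obs[t]]
    path = [int(delta[-1].argmax())]                 # termination: q_T*
    for t in range(T - 1, 0, -1):                    # backtracking via psi
        path.append(int(psi[t, path[-1]]))
    return path[::-1], delta[-1].max()
```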