

An Introduction to Bioinformatics Algorithms — www.bioalgorithms.info

Hidden Markov Models

Ivan Gesteira Costa Filho
Centro de Informatica, Universidade Federal de Pernambuco
Original material: www.bioalgorithms.info


Outline

• CG-islands
• The “Fair Bet Casino”
• Hidden Markov Model
• Decoding Algorithm
• Forward-Backward Algorithm
• HMM Parameter Estimation
  – Viterbi training
  – Baum-Welch algorithm


CG-Islands

• Given 4 nucleotides, the probability of occurrence of each is ~ 1/4. Thus, the probability of occurrence of a given dinucleotide is ~ 1/16.

• However, the frequencies of dinucleotides in DNA sequences vary widely.

• In particular, CG is typically underrepresented (the frequency of CG is typically < 1/16).


Why CG-Islands?

• CG is the least frequent dinucleotide because the C in CG is easily methylated and then has a tendency to mutate into T.

• However, methylation is suppressed around genes in a genome, so CG appears at relatively high frequency within these CG islands.

• So, finding the CG islands in a genome is an important problem.


Markov Chains

• Given a sequence x = AGCTCTC...T

• and transition probabilities a_AG = P(x_i = G | x_{i-1} = A),

• what is P(x)?


Markov Chains

P(x) = P(x_L, x_{L-1}, x_{L-2}, ..., x_1)
     = P(x_L | x_{L-1}, x_{L-2}, ..., x_1) · P(x_{L-1} | x_{L-2}, ..., x_1) ··· P(x_1)

• Markov assumption:

  P(x_i | x_{i-1}, x_{i-2}, ..., x_1) = P(x_i | x_{i-1})

• then

  P(x) = P(x_L | x_{L-1}) · P(x_{L-1} | x_{L-2}) ··· P(x_1)
       = P(x_1) Π_{i=2}^{L} a_{x_{i-1} x_i}


Markov Chains – Start State

• Given a sequence x = b AGCTCTC...T e, padded with a begin state b and an end state e

• Transition probabilities a_AG = P(x_i = G | x_{i-1} = A)

• P(x) = P(x_1) Π_{i=2}^{n} a_{x_{i-1} x_i}

• Treating the begin state as x_1: P(x_1) = 1

• and the first real symbol is generated by a transition out of b, e.g. P(x_2 | x_1) = 1/4 for a uniform start.


Using Markov Model for CG discrimination

• Use curated sequences of CG islands (and of non-CG islands) to build two Markov models (model+ and model−).

• Estimate the transition probabilities as a_st = c_st / Σ_{t'} c_{st'}, where c_st is the number of times the transition s → t occurs in the training sequences.
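As a concrete illustration (not from the slides), here is a minimal Python sketch of this estimator: count the dinucleotide transitions c_st in the training sequences and normalize each row. The function name and the toy training data are hypothetical.

```python
def estimate_transitions(sequences, alphabet="ACGT", pseudocount=0.0):
    """Maximum-likelihood estimate: a_st = c_st / sum_t' c_st'."""
    counts = {s: {t: pseudocount for t in alphabet} for s in alphabet}
    for seq in sequences:
        for s, t in zip(seq, seq[1:]):       # every adjacent pair (s, t)
            counts[s][t] += 1
    model = {}
    for s in alphabet:
        total = sum(counts[s].values())      # sum_t' c_st'
        model[s] = {t: counts[s][t] / total for t in alphabet} if total else None
    return model

# Toy usage: sequences taken from inside islands would give model+,
# background sequences would give model-.
model_plus = estimate_transitions(["CGCGGCGC", "GCGCGCAT"])
```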


Markov Model for CG discrimination

• For a given sequence x, calculate the log-odds score log( P(x | model+) / P(x | model−) ).

[Figure: histograms of the log-odds scores for non-CpG islands vs. CpG islands]
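A matching sketch for the discrimination step, assuming the two transition tables produced above; for simplicity it drops the P(x_1) start term.

```python
import math

def log_odds(x, a_plus, a_minus):
    """log( P(x|model+) / P(x|model-) ), summed over the transitions.
    Assumes both models give nonzero probability to every transition."""
    score = 0.0
    for s, t in zip(x, x[1:]):
        score += math.log(a_plus[s][t] / a_minus[s][t])
    return score

# Positive scores suggest a CG island, negative scores a background sequence.
```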


What if we don't know any CG islands?


CG Islands and the “Fair Bet Casino”

• The CG islands problem can be modeled after a problem named “The Fair Bet Casino”.

• The game is to flip coins, which results in only two possible outcomes: Head or Tail.

• The Fair coin will give Heads and Tails with the same probability ½.

• The Biased coin will give Heads with prob. ¾.


The “Fair Bet Casino” (cont’d)

• Thus, we define the probabilities:
  – P(H|F) = P(T|F) = ½
  – P(H|B) = ¾, P(T|B) = ¼
• The crooked dealer changes between Fair and Biased coins with probability 10%.


The Fair Bet Casino Problem

• Input: A sequence x = x_1 x_2 x_3 … x_n of coin tosses made by two possible coins (F or B).

• Output: A sequence π = π_1 π_2 π_3 … π_n, with each π_i being either F or B, indicating that x_i is the result of tossing the Fair or Biased coin respectively.


Problem…

Fair Bet Casino Problem: any observed outcome of coin tosses could have been generated by any hidden sequence π of coin choices! We need to incorporate a way to grade different sequences differently.

Decoding Problem: find the sequence π with maximum probability.


Hidden Markov Model (HMM)

• Can be viewed as an abstract machine with k hidden states that emits symbols from an alphabet Σ.

• Each state has its own probability distribution, and the machine switches between states according to this probability distribution.

• While in a certain state, the machine makes 2 decisions:
  – What state should I move to next?
  – What symbol from the alphabet Σ should I emit?


HMM for Fair Bet Casino (cont’d)

[Figure: HMM model for the Fair Bet Casino Problem]


Why “Hidden”?

• Observers can see the emitted symbols of an HMM but have no ability to know which state the HMM is currently in.

• Thus, the goal is to infer the most likely hidden states of an HMM based on the given sequence of emitted symbols.


HMM Parameters

Σ: set of emission characters.
  Ex.: Σ = {H, T} for coin tossing
       Σ = {1, 2, 3, 4, 5, 6} for dice tossing

Q: set of hidden states, each emitting symbols from Σ.
  Ex.: Q = {F, B} for coin tossing


HMM Parameters (cont’d)

A = (a_kl): a |Q| × |Q| matrix of the probabilities of changing from state k to state l.
  a_FF = 0.9, a_FB = 0.1
  a_BF = 0.1, a_BB = 0.9

E = (e_k(b)): a |Q| × |Σ| matrix of the probabilities of emitting symbol b while being in state k.
  e_F(0) = ½, e_F(1) = ½
  e_B(0) = ¼, e_B(1) = ¾


HMM for Fair Bet Casino

• The Fair Bet Casino in HMM terms:
  Σ = {0, 1} (0 for Tails and 1 for Heads)
  Q = {F, B} – F for the Fair and B for the Biased coin.
• Transition probabilities A and emission probabilities E:

  A        Fair         Biased
  Fair     a_FF = 0.9   a_FB = 0.1
  Biased   a_BF = 0.1   a_BB = 0.9

  E        Tails (0)    Heads (1)
  Fair     e_F(0) = ½   e_F(1) = ½
  Biased   e_B(0) = ¼   e_B(1) = ¾
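In code, this HMM is just two small tables. A minimal Python representation (variable names are mine, reused by the later sketches):

```python
# Fair Bet Casino HMM, directly from the tables above.
STATES = ("F", "B")                      # Q: Fair, Biased
ALPHABET = (0, 1)                        # Sigma: 0 = Tails, 1 = Heads

A = {"F": {"F": 0.9, "B": 0.1},          # a_kl = P(next state l | state k)
     "B": {"F": 0.1, "B": 0.9}}

E = {"F": {0: 0.50, 1: 0.50},            # e_k(b) = P(emit b | state k)
     "B": {0: 0.25, 1: 0.75}}
```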


HMM for Fair Bet Casino (cont’d)

[Figure: HMM model for the Fair Bet Casino Problem]


Hidden Paths

• A path π = π_1 … π_n in the HMM is defined as a sequence of states.
• Consider the path π = FFFBBBBBFF and the sequence x = 0101110100:

  x                  0     1     0     1     1     1     0     1     0     0
  π                  F     F     F     B     B     B     B     B     F     F
  P(x_i | π_i)       ½     ½     ½     ¾     ¾     ¾     ¼     ¾     ½     ½
  P(π_{i-1} → π_i)   ½    9/10  9/10  1/10  9/10  9/10  9/10  9/10  1/10  9/10

• P(x_i | π_i) is the probability that x_i was emitted from state π_i; P(π_{i-1} → π_i) is the transition probability from state π_{i-1} to state π_i (the first entry ½ is the probability of the initial state).


P(x, π) Calculation

• P(x, π): the probability that sequence x was generated by the path π:

  P(x, π) = Π_{i=1}^{n} P(x_i | π_i) · P(π_i → π_{i+1})
          = Π_{i=1}^{n} e_{π_i}(x_i) · a_{π_i, π_{i+1}}
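Given those tables, P(x, π) for the worked example above is a direct product. A sketch assuming the STATES/A/E tables from the previous snippet and the uniform ½ start probability used in the table:

```python
def joint_prob(x, pi, A, E, start=0.5):
    """P(x, pi) = P(pi_1) * e_{pi_1}(x_1) * prod_{i>1} a_{pi_{i-1},pi_i} * e_{pi_i}(x_i)."""
    p = start * E[pi[0]][x[0]]
    for i in range(1, len(x)):
        p *= A[pi[i - 1]][pi[i]] * E[pi[i]][x[i]]
    return p

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0]
pi = list("FFFBBBBBFF")
print(joint_prob(x, pi, A, E))   # probability of this particular (x, pi) pair
```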


Decoding Problem

• Goal: Find an optimal hidden path of states given the observations.

• Input: A sequence of observations x = x_1…x_n generated by an HMM M(Σ, Q, A, E).

• Output: A path that maximizes P(x, π) over all possible paths π.


Building Manhattan for the Decoding Problem

• Andrew Viterbi used the Manhattan grid model to solve the Decoding Problem.

• Every choice of π = π_1 … π_n corresponds to a path in the graph.

• The only valid direction in the graph is eastward.

• This graph has |Q|²(n − 1) edges.


Edit Graph for Decoding Problem

[Figure: edit graph for the decoding problem; the columns correspond to the observations]


Decoding Problem vs. Alignment Problem

[Figure: valid directions in the alignment problem]

[Figure: valid directions in the decoding problem]


Decoding Problem as Finding a Longest Path in a DAG

• The Decoding Problem is reduced to finding a longest path in the directed acyclic graph (DAG) above.

• Note: the length of a path is defined as the product of its edges’ weights, not the sum.


Decoding Problem (cont’d)

• Every path in the graph has the probability P(x, π).

• The Viterbi algorithm finds the path that maximizes P(x, π) among all possible paths.

• The Viterbi algorithm runs in O(n|Q|²) time.


Decoding Problem: weights of edges

  (k, i) ──w──► (l, i+1)

What should the weight w of the edge from (k, i) to (l, i+1) be? Start from

  P(x, π) = Π_{i=0}^{n} e_{π_{i+1}}(x_{i+1}) · a_{π_i, π_{i+1}}

The i-th term of this product is e_{π_{i+1}}(x_{i+1}) · a_{π_i, π_{i+1}}, so for π_i = k and π_{i+1} = l the weight is

  w = e_l(x_{i+1}) · a_kl


Decoding Problem and Dynamic Programming

s_{l,i+1} = max_{k ∈ Q} { s_{k,i} · (weight of the edge between (k, i) and (l, i+1)) }
          = max_{k ∈ Q} { s_{k,i} · a_kl · e_l(x_{i+1}) }
          = e_l(x_{i+1}) · max_{k ∈ Q} { s_{k,i} · a_kl }


Decoding Problem (cont’d)

• Initialization:
  – s_{begin,0} = 1
  – s_{k,0} = 0 for k ≠ begin.

• Let π* be the optimal path. Then

  P(x, π*) = max_{k ∈ Q} { s_{k,n} · a_{k,end} }


Viterbi Algorithm

• The value of the product can become extremely small, which leads to underflow.

• To avoid underflow, use log values instead:

  s_{l,i+1} = log e_l(x_{i+1}) + max_{k ∈ Q} { s_{k,i} + log a_kl }
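A compact log-space Viterbi sketch under the same toy representation (helper names are mine); it follows the recurrence above and recovers π* by backtracking.

```python
import math

def viterbi(x, states, A, E, start=None):
    """Most probable state path pi* for observations x (log space)."""
    if start is None:                       # uniform start by default
        start = {k: 1.0 / len(states) for k in states}
    s = [{k: math.log(start[k]) + math.log(E[k][x[0]]) for k in states}]
    ptr = []                                # backpointers, one dict per column
    for i in range(1, len(x)):
        col, back = {}, {}
        for l in states:
            best = max(states, key=lambda k: s[-1][k] + math.log(A[k][l]))
            col[l] = math.log(E[l][x[i]]) + s[-1][best] + math.log(A[best][l])
            back[l] = best
        s.append(col)
        ptr.append(back)
    last = max(states, key=lambda k: s[-1][k])
    path = [last]
    for back in reversed(ptr):              # trace the optimal path backwards
        path.append(back[path[-1]])
    return list(reversed(path))

print("".join(viterbi([0, 1, 0, 1, 1, 1, 0, 1, 0, 0], STATES, A, E)))
```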


Forward-Backward Problem

• Given: a sequence of coin tosses generated by an HMM.

• Goal: find the probability that the dealer was using a biased coin at a particular time.


Forward Algorithm

• Define f_{k,i} (the forward probability) as the probability of emitting the prefix x_1…x_i and reaching the state π_i = k:

  f_{k,i} = P(x_1…x_i, π_i = k)

• The recurrence for the forward algorithm:

  f_{k,i} = e_k(x_i) · Σ_{l ∈ Q} f_{l,i-1} · a_lk
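A direct transcription of this recurrence (Python sketch, same A/E tables as before; no scaling, so it can underflow on long sequences):

```python
def forward(x, states, A, E, start=None):
    """f[i][k] = P(x_1..x_i, pi_i = k); P(x) = sum_k f[n-1][k]."""
    if start is None:                         # uniform initial distribution
        start = {k: 1.0 / len(states) for k in states}
    f = [{k: start[k] * E[k][x[0]] for k in states}]
    for i in range(1, len(x)):
        f.append({k: E[k][x[i]] * sum(f[-1][l] * A[l][k] for l in states)
                  for k in states})
    return f
```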


Backward Algorithm

• However, the forward probability is not the only factor affecting P(π_i = k | x).

• The sequence of transitions and emissions that the HMM undergoes between π_{i+1} and π_n also affects P(π_i = k | x).

[Diagram: the forward part covers x_1…x_i, the backward part covers x_{i+1}…x_n]


Backward Algorithm (cont’d)

• Define the backward probability b_{k,i} as the probability of being in state π_i = k and emitting the suffix x_{i+1}…x_n.

• The recurrence for the backward algorithm:

  b_{k,i} = Σ_{l ∈ Q} e_l(x_{i+1}) · b_{l,i+1} · a_kl
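And the mirror-image backward pass, under the same assumptions:

```python
def backward(x, states, A, E):
    """b[i][k] = P(x_{i+1}..x_n | pi_i = k); the empty suffix has probability 1."""
    n = len(x)
    b = [None] * n
    b[n - 1] = {k: 1.0 for k in states}
    for i in range(n - 2, -1, -1):
        b[i] = {k: sum(E[l][x[i + 1]] * b[i + 1][l] * A[k][l] for l in states)
                for k in states}
    return b
```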


Backward-Forward Algorithm

• The probability that the dealer used a biased coin at any moment i:

  P(π_i = k | x) = P(x, π_i = k) / P(x) = f_k(i) · b_k(i) / P(x)

  where P(x) is the sum of P(x, π_i = k) over all k.
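Putting the two together gives the posterior at every position, e.g. the probability that the dealer's coin was Biased at each toss. A sketch reusing the forward/backward functions above:

```python
def posterior(x, states, A, E):
    """P(pi_i = k | x) = f_{k,i} * b_{k,i} / P(x) for every position i."""
    f = forward(x, states, A, E)
    b = backward(x, states, A, E)
    px = sum(f[-1][k] for k in states)            # P(x) from the forward pass
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(len(x))]

post = posterior([0, 1, 0, 1, 1, 1, 0, 1, 0, 0], STATES, A, E)
print([round(p["B"], 2) for p in post])           # P(Biased) at each toss
```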


Example: Fair dice

[Figure: P(π_i = “fair” | x) plotted along a given sequence of dice rolls x]


HMM Parameter Estimation

• So far, we have assumed that the transition and emission probabilities are known.

• However, in most HMM applications the probabilities are not known, and estimating them is very hard.


HMM Parameter Estimation Problem

• Given: an HMM with states and an alphabet (emission characters), and independent training sequences x^1, …, x^m.

• Find: HMM parameters Θ (that is, a_kl, e_k(b)) that maximize P(x^1, …, x^m | Θ), the joint probability of the training sequences.


Maximize the likelihood

• P(x^1, …, x^m | Θ), as a function of Θ, is called the likelihood of the model.

• The training sequences are assumed independent; therefore

  P(x^1, …, x^m | Θ) = Π_i P(x^i | Θ)

• The parameter estimation problem seeks the Θ that realizes

  max_Θ Π_i P(x^i | Θ)

• In practice, the log likelihood is computed to avoid underflow errors.


Two situations

• Known paths for the training sequences:
  – CpG islands are marked on the training sequences
  – One evening the casino dealer allows us to see when he changes dice

• Unknown paths:
  – CpG islands are not marked
  – We do not see when the casino dealer changes dice


Known paths

• A_kl = number of times the transition k → l is taken in the training sequences.

• E_k(b) = number of times b is emitted from state k in the training sequences.

• Compute a_kl and e_k(b) as maximum likelihood estimators:

  a_kl = A_kl / Σ_{l'} A_{kl'}
  e_k(b) = E_k(b) / Σ_{b'} E_k(b')


Pseudocounts

• Some state k may not appear in any of the training sequences. This means A_kl = 0 for every state l, and a_kl cannot be computed with the given equation.

• To avoid this overfitting, use predetermined pseudocounts r_kl and r_k(b):

  A_kl = (number of transitions k → l) + r_kl
  E_k(b) = (number of emissions of b from k) + r_k(b)

• The pseudocounts reflect our prior biases about the probability values.
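With known paths, the estimators are plain normalized counts, and the pseudocounts simply seed the count tables. A Python sketch (function and argument names are mine):

```python
def estimate_from_labeled(data, states, alphabet, r_trans=0.0, r_emit=0.0):
    """ML estimates a_kl, e_k(b) from (x, pi) pairs with known paths.
    r_trans / r_emit play the role of the pseudocounts r_kl and r_k(b)."""
    Akl = {k: {l: r_trans for l in states} for k in states}
    Ekb = {k: {b: r_emit for b in alphabet} for k in states}
    for x, pi in data:
        for i, (b, k) in enumerate(zip(x, pi)):
            Ekb[k][b] += 1                        # emission of b in state k
            if i + 1 < len(pi):
                Akl[k][pi[i + 1]] += 1            # transition k -> pi_{i+1}
    a = {k: {l: Akl[k][l] / sum(Akl[k].values()) for l in states} for k in states}
    e = {k: {b: Ekb[k][b] / sum(Ekb[k].values()) for b in alphabet} for k in states}
    return a, e
```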


Unknown paths: Viterbi training

• Idea: use Viterbi decoding to compute the most probable path for each training sequence x.

• Start with some guess for the initial parameters and compute π*, the most probable path for x, using these initial parameters.

• Iterate until there is no change in π*:

1. Determine A_kl and E_k(b) as before.

2. Compute new parameters a_kl and e_k(b) using the same formulas as before.

3. Compute a new π* for x and the current parameters.
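The training loop itself is short. A sketch reusing the viterbi decoder and the labeled-path estimator from the earlier snippets (all names mine; small pseudocounts keep the decoder's logarithms finite):

```python
def viterbi_training(xs, states, alphabet, a, e, max_iter=100):
    """Decode -> count -> re-estimate, until pi* stops changing."""
    prev_paths = None
    for _ in range(max_iter):
        paths = [viterbi(x, states, a, e) for x in xs]     # current pi* per x
        if paths == prev_paths:                            # no change: done
            break
        a, e = estimate_from_labeled(list(zip(xs, paths)), states, alphabet,
                                     r_trans=1.0, r_emit=1.0)
        prev_paths = paths
    return a, e
```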


Viterbi training analysis

• The algorithm converges precisely:
  – There are finitely many possible paths.
  – The new parameters are uniquely determined by the current π*.
  – There may be several paths for x with the same probability, so one must compare the new π* with all previous paths having the highest probability.

• It does not maximize the likelihood Π_x P(x | Θ) but the contribution to the likelihood of the most probable path, Π_x P(x | Θ, π*).

• In general it performs less well than Baum-Welch.


Unknown paths: Baum-Welch

Idea:

1. Guess initial values for the parameters. (art and experience, not science)

2. Estimate new (better) values for the parameters. (how?)

3. Repeat until a stopping criterion is met. (what criterion?)


Better values for parameters

• We would need the A_kl and E_k(b) values, but we cannot count them (the path is unknown), and we do not want to use a most probable path.

• Instead, for all states k, l, symbols b, and every training sequence x, compute A_kl and E_k(b) as expected values, given the current parameters.


Notation

For any sequence of characters x emitted along some unknown path π, denote by πi = k the assumption that the state at position i (in which xi is emitted) is k.


Probabilistic setting for A_kl

• Given x^1, …, x^m, consider a discrete probability space with elementary events

  ε_kl = “k → l is taken in x^1, …, x^m”

• For each x in {x^1, …, x^m} and each position i in x, let Y_{x,i} be the random variable defined by

  Y_{x,i}(ε_kl) = 1 if π_i = k and π_{i+1} = l, and 0 otherwise.

• Define Y = Σ_x Σ_i Y_{x,i}, the random variable that counts the number of times the event ε_kl happens in x^1, …, x^m.


The meaning of A_kl

• Let A_kl be the expectation of Y:

  E(Y) = Σ_x Σ_i E(Y_{x,i}) = Σ_x Σ_i P(Y_{x,i} = 1)
       = Σ_x Σ_i P({ε_kl | π_i = k and π_{i+1} = l})
       = Σ_x Σ_i P(π_i = k, π_{i+1} = l | x)

• So we need to compute P(π_i = k, π_{i+1} = l | x).


Probabilistic setting for E_k(b)

• Given x^1, …, x^m, consider a discrete probability space with elementary events

  ε_{k,b} = “b is emitted in state k in x^1, …, x^m”

• For each x in {x^1, …, x^m} and each position i in x, let Y_{x,i} be the random variable defined by

  Y_{x,i}(ε_{k,b}) = 1 if x_i = b and π_i = k, and 0 otherwise.

• Define Y = Σ_x Σ_i Y_{x,i}, the random variable that counts the number of times the event ε_{k,b} happens in x^1, …, x^m.


The meaning of E_k(b)

• Let E_k(b) be the expectation of Y:

  E(Y) = Σ_x Σ_i E(Y_{x,i}) = Σ_x Σ_i P(Y_{x,i} = 1)
       = Σ_x Σ_i P({ε_{k,b} | x_i = b and π_i = k})
       = Σ_x Σ_{i : x_i = b} P(π_i = k | x)

• So we need to compute P(π_i = k | x).


Computing new parameters

• Consider a training sequence x = x_1…x_n.

• Concentrate on positions i and i+1.

• Use the forward-backward values:

  f_{k,i} = P(x_1 … x_i, π_i = k)
  b_{k,i} = P(x_{i+1} … x_n | π_i = k)


Compute A_kl (1)

• The probability that k → l is taken at position i of x:

  P(π_i = k, π_{i+1} = l | x_1…x_n) = P(x, π_i = k, π_{i+1} = l) / P(x)

• Compute P(x) using either the forward or the backward values.

• We’ll show that P(x, π_i = k, π_{i+1} = l) = b_{l,i+1} · e_l(x_{i+1}) · a_kl · f_{k,i}

• The expected number of times k → l is used in the training sequences is then

  A_kl = Σ_x Σ_i ( b_{l,i+1} · e_l(x_{i+1}) · a_kl · f_{k,i} ) / P(x)
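In code, this expected count is a double loop over sequences and positions, reusing the forward/backward sketches from earlier (names mine):

```python
def expected_transitions(xs, states, A, E):
    """A_kl = sum_x sum_i b_{l,i+1} * e_l(x_{i+1}) * a_kl * f_{k,i} / P(x)."""
    Akl = {k: {l: 0.0 for l in states} for k in states}
    for x in xs:
        f, b = forward(x, states, A, E), backward(x, states, A, E)
        px = sum(f[-1][k] for k in states)
        for i in range(len(x) - 1):
            for k in states:
                for l in states:
                    Akl[k][l] += b[i + 1][l] * E[l][x[i + 1]] * A[k][l] * f[i][k] / px
    return Akl
```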


Compute A_kl (2)

P(x, π_i = k, π_{i+1} = l)
  = P(x_1…x_i, π_i = k, π_{i+1} = l, x_{i+1}…x_n)
  = P(π_{i+1} = l, x_{i+1}…x_n | x_1…x_i, π_i = k) · P(x_1…x_i, π_i = k)
  = P(π_{i+1} = l, x_{i+1}…x_n | π_i = k) · f_{k,i}
  = P(x_{i+1}…x_n | π_i = k, π_{i+1} = l) · P(π_{i+1} = l | π_i = k) · f_{k,i}
  = P(x_{i+1}…x_n | π_{i+1} = l) · a_kl · f_{k,i}
  = P(x_{i+2}…x_n | x_{i+1}, π_{i+1} = l) · P(x_{i+1} | π_{i+1} = l) · a_kl · f_{k,i}
  = P(x_{i+2}…x_n | π_{i+1} = l) · e_l(x_{i+1}) · a_kl · f_{k,i}
  = b_{l,i+1} · e_l(x_{i+1}) · a_kl · f_{k,i}


Compute E_k(b)

• The probability that x_i of x is emitted in state k:

  P(π_i = k | x_1…x_n) = P(π_i = k, x_1…x_n) / P(x)

  P(π_i = k, x_1…x_n) = P(x_1…x_i, π_i = k, x_{i+1}…x_n)
    = P(x_{i+1}…x_n | x_1…x_i, π_i = k) · P(x_1…x_i, π_i = k)
    = P(x_{i+1}…x_n | π_i = k) · f_{k,i} = b_{k,i} · f_{k,i}

• The expected number of times b is emitted in state k:

  E_k(b) = Σ_x Σ_{i : x_i = b} ( f_{k,i} · b_{k,i} ) / P(x)
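The emission counterpart, with the inner sum restricted to the positions where x_i = b:

```python
def expected_emissions(xs, states, alphabet, A, E):
    """E_k(b) = sum_x sum_{i : x_i = b} f_{k,i} * b_{k,i} / P(x)."""
    Ekb = {k: {b: 0.0 for b in alphabet} for k in states}
    for x in xs:
        f, bwd = forward(x, states, A, E), backward(x, states, A, E)
        px = sum(f[-1][k] for k in states)
        for i, sym in enumerate(x):           # positions grouped by symbol
            for k in states:
                Ekb[k][sym] += f[i][k] * bwd[i][k] / px
    return Ekb
```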


Finally, new parameters

  a_kl = A_kl / Σ_{l'} A_{kl'}
  e_k(b) = E_k(b) / Σ_{b'} E_k(b')

• Pseudocounts can be added as before.


Stopping criteria

• We cannot actually reach the maximum (this is optimization of continuous functions).

• Therefore we need stopping criteria:
  – Compute the log likelihood of the model for the current Θ:
    Σ_x log P(x | Θ)
  – Compare it with the previous log likelihood.
  – Stop if the difference is small.
  – Stop after a certain number of iterations.


The Baum-Welch algorithm

• Initialization: pick best-guess model parameters (or arbitrary ones).

• Iteration:
  1. Run the forward algorithm for each x.
  2. Run the backward algorithm for each x.
  3. Calculate A_kl, E_k(b).
  4. Calculate the new a_kl, e_k(b).
  5. Calculate the new log-likelihood.

• Repeat until the log-likelihood does not change much.
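A sketch tying the pieces together with the expected-count helpers above; the tolerance and iteration cap are arbitrary choices, not from the slides.

```python
import math

def baum_welch(xs, states, alphabet, a, e, tol=1e-4, max_iter=200):
    """EM loop: forward/backward -> expected counts -> renormalize."""
    def log_likelihood():
        return sum(math.log(sum(forward(x, states, a, e)[-1][k] for k in states))
                   for x in xs)
    prev = log_likelihood()
    for _ in range(max_iter):
        Akl = expected_transitions(xs, states, a, e)           # steps 1-3
        Ekb = expected_emissions(xs, states, alphabet, a, e)
        a = {k: {l: Akl[k][l] / sum(Akl[k].values()) for l in states}
             for k in states}                                  # step 4
        e = {k: {b: Ekb[k][b] / sum(Ekb[k].values()) for b in alphabet}
             for k in states}
        cur = log_likelihood()                                 # step 5
        if cur - prev < tol:                                   # barely changed
            break
        prev = cur
    return a, e
```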


Baum-Welch analysis

• The log-likelihood is increased by the iterations.

• Baum-Welch is a particular case of the EM (expectation-maximization) algorithm.

• It converges to a local maximum; the choice of initial parameters determines which local maximum the algorithm converges to.


Log-likelihood is increased by iterations

• The relative entropy of two distributions P, Q:

  H(P||Q) = Σ_i P(x_i) log ( P(x_i) / Q(x_i) )

• Properties:
  – H(P||Q) ≥ 0
  – H(P||Q) = 0 iff P(x_i) = Q(x_i) for all i

• The proof of the property is based on:
  – f(x) = x − 1 − log x ≥ 0 for x > 0 (with the natural log)
  – f(x) = 0 iff x = 1


Proof cont’d

• The log likelihood is log P(x | Θ) = log Σ_π P(x, π | Θ), and

  P(x, π | Θ) = P(π | x, Θ) P(x | Θ)

• Assume Θ^t are the current parameters. Choose Θ^{t+1} such that log P(x | Θ^{t+1}) > log P(x | Θ^t).

• log P(x | Θ) = log P(x, π | Θ) − log P(π | x, Θ), so

  log P(x | Θ) = Σ_π P(π | x, Θ^t) log P(x, π | Θ) − Σ_π P(π | x, Θ^t) log P(π | x, Θ)

  because Σ_π P(π | x, Θ^t) = 1.


Proof cont’d

• Notation:

  Q(Θ | Θ^t) = Σ_π P(π | x, Θ^t) log P(x, π | Θ)

• We show that a Θ^{t+1} that maximizes log P(x | Θ) may be chosen to be some Θ that maximizes Q(Θ | Θ^t):

  log P(x | Θ) − log P(x | Θ^t)
    = Q(Θ | Θ^t) − Q(Θ^t | Θ^t) + Σ_π P(π | x, Θ^t) log ( P(π | x, Θ^t) / P(π | x, Θ) )

• The last sum is non-negative (it is a relative entropy).


Proof cont’d

• Conclusion:

  log P(x | Θ) − log P(x | Θ^t) ≥ Q(Θ | Θ^t) − Q(Θ^t | Θ^t)

  with equality only when Θ = Θ^t, or when P(π | x, Θ^t) = P(π | x, Θ) for some Θ ≠ Θ^t.


Proof cont’d

• For an HMM,

  P(x, π | Θ) = a_{0,π_1} Π_{i=1}^{|x|} e_{π_i}(x_i) a_{π_i, π_{i+1}}

• Let
  – A_kl(π) = the number of times k → l appears in this product
  – E_k(b, π) = the number of times an emission of b from k appears in this product

• The product is a function of Θ, but A_kl(π) and E_k(b, π) do not depend on Θ.


Proof cont’d

• Write the product using all the factors:
  – e_k(b) to the power E_k(b, π)
  – a_kl to the power A_kl(π)

• Then replace the product in Q(Θ | Θ^t):

  Q(Θ | Θ^t) = Σ_π P(π | x, Θ^t) ( Σ_{k=1}^{M} Σ_b E_k(b, π) log e_k(b) + Σ_{k=0}^{M} Σ_{l=1}^{M} A_kl(π) log a_kl )


Proof cont’d

• Recall the A_kl and E_k(b) computed by the Baum-Welch algorithm at every iteration, and consider those computed at iteration t (based on Θ^t). Then

  A_kl = Σ_π P(π | x, Θ^t) A_kl(π)
  E_k(b) = Σ_π P(π | x, Θ^t) E_k(b, π)

  i.e., the expectations of A_kl(π) and E_k(b, π) over P(π | x, Θ^t).


Proof cont’d

• Then, changing the order of the summations,

  Q(Θ | Θ^t) = Σ_{k=1}^{M} Σ_b E_k(b) log e_k(b) + Σ_{k=0}^{M} Σ_{l=1}^{M} A_kl log a_kl

• Note that Θ consists of {a_kl} and {e_k(b)}.

• The algorithm computes Θ^{t+1} to consist of

  A_kl / Σ_{l'} A_{kl'} and E_k(b) / Σ_{b'} E_k(b')

• One can show that this Θ^{t+1} maximizes Q(Θ | Θ^t) (compute the differences for the A part and for the E part separately).


Speech Recognition

• Create an HMM of the words in a language.
• Each word is a hidden state in Q.
• Each of the basic sounds in the language is a symbol in Σ.

• Input: use speech as the input sequence.
• Goal: find the most probable sequence of states.


Speech Recognition: Building the Model

• Analyze some large source of English sentences, such as a database of newspaper articles, to form the probability matrices.

• A_0i: the chance that word i begins a sentence.

• A_ij: the chance that word j follows word i.


Building the Model (cont’d)

• Analyze English speakers to determine what sounds are emitted with what words.

• E_k(b): the chance that sound b is spoken in word k. This allows for alternate pronunciations of words.


Speech Recognition: Using the Model

• Use the same dynamic programming algorithm as before.
• Weave the spoken sounds through the model the same way we wove the rolls of the die through the casino model.
• π represents the most likely set of words.


Using the Model (cont’d)

• How well does it work?
• Common words, such as ‘the’, ‘a’, ‘of’, make the prediction less accurate, since there are so many words that can normally follow them.


Improving Speech Recognition

• Initially, we were using a ‘bigram’, a graph connecting every two words.
• Expand that to a ‘trigram’:
  – Each state represents two words spoken in succession.
  – Each edge joins those two words (A B) to another state representing (B C).
  – This requires n³ vertices and edges, where n is the number of words in the language.
• Much better, but still limited context.


References

• Slides for CS 262 course at Stanford given by Serafim Batzoglou