"speech recognition" - hidden markov models @ papers we love bucharest


Page 1

“Hidden Markov Models for Speech Recognition”

by B.H. Juang and L.R. Rabiner

Papers We Love Bucharest, Stefan Alexandru Adam, 9th of November 2015

TechHub

Page 2

The history of Automated Speech Recognition (ASR)

• 1950-1960: "Baby Talk" (only digits)
• 1970: Speech recognition takes off (1011 words)
• 1980: Big revolution, the prediction-based approach; discovery of the use of HMMs in speech recognition
• See "Automatic Speech Recognition – A Brief History of the Technology Development" by B.H. Juang and Lawrence R. Rabiner
• 1990s: Automatic speech recognition comes to the masses
• 2000: computer accuracy topped out at 80%
• Now: Siri, Google Voice, Cortana

Page 3

Speech signal

• The voice band is 0-4000 Hz; in telephony, 300-3400 Hz
• According to the Nyquist-Shannon sampling theorem, the sampling rate should be at least 8000 Hz (twice the highest signal frequency)
• Phoneme: the smallest unit of a phonetic language

Page 4

ASR Model

Pipeline (flattened diagram): Audio Signal → Preprocessing → Features Extraction → Detection Engine

Data flow: voice signal → features vector → words

Page 5

ASR Model – Preprocessing

Flow: audio signal $x(n)$ → Pre-emphasis ($\alpha = 0.97$) → $\hat{s}(n)$ → Noise Cancelation → $s_1(n)$ → Voice Activity Detection (yes/no) → $x_1(n)$

It is assumed that the initial part of the signal (usually 200 ms) is noise and silence.

$$VAD(m) = \begin{cases} 1, & W_{s_1}(m) \ge t_w \\ 0, & W_{s_1}(m) < t_w \end{cases}$$

where the threshold $t_w$ is equal to $W$ computed for the first 200 ms. The weight $W$ usually depends on signal energy, power, and zero-crossing rate.

Page 6

ASR Model – Features Extraction – Frame Blocking and Windowing

Flow: $x_1(n)$ → Frame Blocking → $x_1(k;m)$ → Windowing → $x_2(k;m)$

Each frame is K samples long and overlaps the previous one by P samples. In order to remove discontinuities at the frame edges, a Hamming window is applied (a sketch follows).
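A sketch of frame blocking and windowing; the values K = 200 and P = 120 (25 ms frames with 15 ms overlap at 8 kHz) are assumed, since the slide leaves them unspecified:

```python
import numpy as np

def frame_block_and_window(x1, K=200, P=120):
    # Frame blocking: frames of K samples, each overlapping the previous by P.
    step = K - P
    n_frames = 1 + (len(x1) - K) // step
    x1_km = np.stack([x1[m * step : m * step + K] for m in range(n_frames)])
    # Windowing: a Hamming window removes discontinuities at the frame edges.
    return x1_km * np.hamming(K)
```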

Page 7

ASR Model – Features Extraction

Extract the sensitive information from the signal; this information is represented as a vector of doubles.

There are three main approaches:
• Linear Predictive Coding (LPC)
• Mel Frequency Cepstral Coefficients (MFCC)
• Perceptual Linear Prediction (PLP)

Page 8

ASR Model – Features Extraction – LPC

The idea is to determine the transfer function $H(z)$ of the speech production system. A speech sample $s(n)$ at time $n$ can be modelled as a linear combination of the past $p$ speech samples plus a scaled excitation $u(n)$:

$$s(n) = b_0 u(n) + \sum_{i=1}^{p} a_i s(n-i)$$

Flow: $x_2(k;m)$ → Autocorrelation → $r_{x_2 x_2}(p;m)$ → Levinson–Durbin → $[b_0, a_1, \dots, a_p]_m$, usually $p = 13$.
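A minimal sketch of the autocorrelation method with the Levinson–Durbin recursion (my own illustration; assumes a non-silent frame):

```python
import numpy as np

def lpc(frame, p=13):
    # Autocorrelation of the windowed frame at lags 0..p.
    n = len(frame)
    r = np.correlate(frame, frame, mode='full')[n - 1 : n + p]
    # Levinson-Durbin solves the Toeplitz system for
    # A(z) = 1 + a_1 z^-1 + ... + a_p z^-p.
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, p + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])) / err  # reflection coeff.
        a[1:i] += k * a[i - 1 : 0 : -1]
        a[i] = k
        err *= 1.0 - k * k
    # Predictor coefficients a_i (s(n) ~ sum a_i s(n-i)) and gain b_0.
    return -a[1:], np.sqrt(err)
```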

Page 9

ASR Model – Features Extraction – MFCC

• Mel scale
  • A perceptual scale of pitches judged by listeners to be equally spaced from one another
  • Converts hertz into mels

Hz   20  160  394  670  1000  1420  1900  2450  3120  4000  5100  6600  9000  14000
mel   0  250  500  750  1000  1250  1500  1750  2000  2250  2500  2750  3000   3250
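The slide doesn't give the Hz-to-mel formula; a commonly used one is $m = 2595 \log_{10}(1 + f/700)$, which matches the table only approximately (the table follows an older empirical scale):

```python
import numpy as np

def hz_to_mel(f):
    # m = 2595 * log10(1 + f / 700); maps 1000 Hz to ~1000 mel.
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    # Inverse of hz_to_mel.
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)
```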

Page 10

ASR Model – Features Extraction – MFCC: Mel Filter Banks

Steps:
1. Establish a lower and a higher frequency (typically 0-4000 Hz)
2. Convert this interval to a mel interval
3. Divide this interval into equal parts (typically 26-40 filter banks)
4. Convert back to hertz
5. Define the filter bank function (a sketch implementing these steps follows the formula)

$$H_m(k) = \begin{cases} 0, & k < f(m-1) \\ \dfrac{k - f(m-1)}{f(m) - f(m-1)}, & f(m-1) \le k \le f(m) \\ \dfrac{f(m+1) - k}{f(m+1) - f(m)}, & f(m) \le k \le f(m+1) \\ 0, & k > f(m+1) \end{cases}$$

Page 11

ASR Model – Features Extraction – MFCC: Mel Filter Banks (figure)

Page 12

ASR Model – Features Extraction – MFCC

Features extraction flow:

$x_2(k;m)$ → DFT → $X_2(n;m)$ → $|X_2(n;m)|$ → Mel-scaled filter bank → $m_k$ → $\log(m_k)$ → DCT → keep only the first coefficients → cepstral features $c(n;m)$

• Signal energy E and the delta coefficients are usually included in the output.
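Putting the flow together, reusing the filter bank H from the previous sketch (the epsilon guarding the log is my addition):

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(frames_x2, H, n_ceps=13):
    # DFT -> magnitude spectrum |X2(n;m)|.
    mag = np.abs(np.fft.rfft(frames_x2, n=512))
    # Mel-scaled filter bank energies m_k, then log(m_k).
    log_mk = np.log(mag @ H.T + 1e-10)
    # DCT, keeping only the first n_ceps coefficients.
    return dct(log_mk, type=2, axis=1, norm='ortho')[:, :n_ceps]
```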

Page 13

ASR Model – Detection Engine

• Pattern matching
• Hidden Markov Models
• Neural Networks

Page 14

Markov Model

A Markov model is used to model randomly changing systems where future states depend only on the present state. A process $X_1, X_2, \dots$ with this property, $P(X_{t+1} = s_j \mid X_t = s_i, \dots, X_1) = P(X_{t+1} = s_j \mid X_t = s_i)$, is called a Markov process.

• $S$ = set of states $\{s_1, \dots, s_n\}$
• $\Pi$ = initial state probabilities: $\pi_i = P(X_1 = s_i)$
• $A$ = transition probabilities: $a_{ij} = P(X_{t+1} = s_j \mid X_t = s_i)$

Page 15

Hidden Markov Model

• $V$ = output alphabet = $\{v_1, v_2, \dots, v_m\}$
• $B$ = output emission probabilities: $b_i(k) = P(V_t = v_k \mid X_t = s_i)$

Notice that $B$ doesn't depend on time. The states of the system are hidden and we observe only the emissions.

Page 16

Hidden Markov Model

(Trellis diagram: hidden states $X_{T-1}, X_T, X_{T+1}$ emitting observations $V_{T-1}, V_T, V_{T+1}$.)

Generation procedure:

t = 1;
start in state $s_i$ with probability $\pi_i$ (i.e., $X_1 = s_i$);
forever do
    move from state $s_i$ to state $s_j$ with probability $a_{ij}$ (i.e., $X_{t+1} = s_j$);
    emit observation symbol $v_k$ with probability $b_j(k)$;
    t = t + 1;
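The same procedure as a NumPy sketch; pi, A, and B are the parameter arrays defined on the previous slides:

```python
import numpy as np

def generate(pi, A, B, T, seed=0):
    # pi: initial probabilities, A: transition matrix, B: emission matrix b_i(k).
    rng = np.random.default_rng(seed)
    states, observations = [], []
    s = rng.choice(len(pi), p=pi)                # X_1 = s_i with probability pi_i
    for _ in range(T):
        states.append(s)
        observations.append(rng.choice(B.shape[1], p=B[s]))  # emit v_k ~ b_s(k)
        s = rng.choice(len(A), p=A[s])           # move to s_j with probability a_sj
    return states, observations
```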

Page 17

Hidden Markov Model

Questions:
1. Given the current and the previous t-1 emissions, what is the probability of the system being in state $s_i$?
2. Knowing the current state, what is the probability of observing the following n emissions?
3. Knowing the emission sequence, what is the most likely state sequence?
4. Knowing the emission sequence, what is the probability of the system being in state $s_i$ at time k, where k < n?

Page 18

Hidden Markov Model

For each question there is a corresponding dynamic programming algorithm:

1. Forward algorithm (recursive computation)
2. Backward algorithm (recursive computation)
3. Viterbi algorithm
4. Forward-Backward algorithm, based on 1 and 2
5. Baum-Welch algorithm, based on 4
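A sketch of the forward algorithm (question 1); in practice the alphas are scaled or computed in log space to avoid underflow:

```python
import numpy as np

def forward(pi, A, B, obs):
    # alpha[t, i] = P(o_1..o_t, X_t = s_i); P(obs) = sum_i alpha[T-1, i].
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha[-1].sum()
```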

Page 19

ASR Model – HMM Detection Engine

Flow (flattened diagram): the training features are vector-quantized to build a Codebook; the cepstral features $c(n;m)$ are then quantized against the codebook into discrete symbols that feed a discrete HMM, which is used for learning and for detecting a word/phoneme.

Page 20

ASR Model – HMM Detection Engine: Vector Quantization

• Apply vector quantization and define the Codebook
• Usually the length of the Codebook is several hundred entries
• Apply the K-Means algorithm (a minimal sketch follows)
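A minimal K-Means (Lloyd's algorithm) sketch for building the codebook; a real system would use a library implementation:

```python
import numpy as np

def build_codebook(features, size=256, iters=20, seed=0):
    # features: (N, D) array of feature vectors; returns (size, D) centroids.
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), size, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest codebook entry.
        d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        # Move each entry to the mean of its assigned vectors.
        for k in range(size):
            if (nearest == k).any():
                codebook[k] = features[nearest == k].mean(axis=0)
    return codebook

def quantize(features, codebook):
    # Map each feature vector to the index of its nearest codebook entry.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```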

Page 21

ASR Model – HMM Detection Engine: Training

Given an HMM, record samples for a specific word or phoneme, then train the model on the extracted quantized features using Viterbi or Baum-Welch.
• The quantized features play the role of an observation sequence
• Usually the number of training samples is 10; the samples can be grouped into two categories (male and female voices)

Label (tag) the model with the corresponding word or phoneme.

Page 22

ASR Model – HMM Detection Engine: Detection

Given a quantized feature sequence, iterate through each existing HMM and find the one that assigns the sequence the maximum probability (see the sketch below).

Detection properties:
• Detection is very fast (and easy to parallelize)
• Accuracy is about 80% in the base implementation
• Limitation: it doesn't take observation duration into account
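Detection then reduces to an argmax over the trained models, reusing the forward() sketch from the algorithms slide:

```python
def detect(models, obs):
    # models: dict mapping a word/phoneme label to its (pi, A, B) parameters.
    return max(models, key=lambda label: forward(*models[label], obs))
```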

Page 23

ASR Model – HMM Detection Engine: Demo

• https://github.com/AdamStefan/Speech-Recognition