CS 416 Artificial Intelligence
Lecture 19: Reasoning over Time
Chapter 15

Upload: bertram-lambert

Post on 17-Jan-2016


Page 1:

CS 416 Artificial Intelligence
Lecture 19: Reasoning over Time
Chapter 15

Page 2: Hidden Markov Models (HMMs)

Represent the state of the world with a single discrete variable

• If your state has multiple variables, form one variable whose value takes on all possible tuples of the multiple variables
  – A two-variable system (heads/tails and red/green/blue) becomes a single-variable system with six values (heads/red, tails/red, ...)

Page 3: HMMs

• Let the number of states be S
  – The transition model T is an S×S matrix filled with P(X_t = j | X_{t-1} = i)
     The probability of transitioning from any state to any other
  – Consider obtaining evidence e_t at each timestep
     Construct an S×S matrix O_t consisting of P(e_t | X_t = i) along the diagonal and zeros elsewhere
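As a minimal sketch of the two matrices above, using plain Python lists; the two states and all the probability values are invented for illustration (they resemble the book's umbrella example, not a speech model):

```python
S = 2  # number of states, e.g. 0 = "rain", 1 = "no rain"

# Transition model T: T[i][j] = P(X_t = j | X_{t-1} = i).
# Each row is a probability distribution, so it sums to 1.
T = [
    [0.7, 0.3],
    [0.3, 0.7],
]

def observation_matrix(p_e_given_state):
    """Build the diagonal S x S matrix O_t from the sensor
    probabilities P(e_t | X_t = i) for the evidence seen at time t."""
    n = len(p_e_given_state)
    return [[p_e_given_state[i] if i == j else 0.0 for j in range(n)]
            for i in range(n)]

# Example: P(umbrella observed | rain) = 0.9, P(umbrella | no rain) = 0.2.
O = observation_matrix([0.9, 0.2])
```

Note that O_t depends on which evidence value e_t was actually observed, so a new diagonal matrix is built at every timestep.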

Page 4: HMMs

Rewriting the FORWARD algorithm

• Constructing the predicted sequence of states from 0 to t+1 given e_0 ... e_{t+1}
  – f_{1:t+1} = FORWARD(f_{1:t}, e_{t+1})
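In matrix form this step is f_{1:t+1} = α O_{t+1} T^T f_{1:t}, where α normalizes the result. A sketch of one such step, reusing the list-of-lists matrix convention (T is the transition matrix, O_t1 the diagonal observation matrix; the numbers passed in at the bottom are illustrative only):

```python
def forward_step(f, T, O_t1):
    """One FORWARD step: predict through T, weight by the evidence
    probabilities on O_t1's diagonal, then normalize to sum to 1."""
    S = len(f)
    # Predict: (T^T f)[j] = sum_i T[i][j] * f[i]
    predicted = [sum(T[i][j] * f[i] for i in range(S)) for j in range(S)]
    # Update with evidence: multiply by the diagonal of O_{t+1}.
    unnormalized = [O_t1[j][j] * predicted[j] for j in range(S)]
    norm = sum(unnormalized)
    return [x / norm for x in unnormalized]

f = forward_step([0.5, 0.5],
                 [[0.7, 0.3], [0.3, 0.7]],
                 [[0.9, 0.0], [0.0, 0.2]])
```

Starting from a uniform belief, the evidence (which is much likelier in state 0) pulls most of the probability mass onto state 0.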

Page 5: HMMs

Optimizations

• FORWARD and BACKWARD can be written in matrix form
• The matrix forms make it easier to inspect the algorithms for speedups
  – Consult the book if you are interested in these for the assignment

Page 6: Speech recognition vs. speech understanding

Recognition

• Convert the acoustic signal into words
  – P(words | signal) = α P(signal | words) P(words)   (Bayes' rule, with α a normalizing constant)

Understanding

• Recognizing the context and semantics of the words

We have a model of P(signal | words), and we have a model of P(words) too.
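A toy sketch of the recognition equation: score each candidate word sequence by P(signal | words) × P(words) and normalize. The candidate phrases and every probability below are invented purely for illustration, standing in for real acoustic-model and language-model outputs:

```python
candidates = {
    # words: (P(signal | words) from the acoustic model,
    #         P(words) from the language model)
    "recognize speech":   (0.30, 0.010),
    "wreck a nice beach": (0.35, 0.001),
}

# Bayes' rule: P(words | signal) = alpha * P(signal | words) * P(words)
unnormalized = {w: p_sig * p_w for w, (p_sig, p_w) in candidates.items()}
alpha = 1.0 / sum(unnormalized.values())
posterior = {w: alpha * p for w, p in unnormalized.items()}

best = max(posterior, key=posterior.get)
```

Even though the second phrase fits the signal slightly better, the language model's strong preference for the first phrase decides the outcome.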

Page 7: Applications

• NaturallySpeaking (interesting story from Wired), ViaVoice, ...
  – A 90% hit rate is a 10% error rate
  – We want a 98% or 99% success rate
• Dictation
  – It is cheaper to play a doctor's audio tapes into a telephone so someone in India can type the text and email it back
• User control of devices
  – "Call home"

Page 8: CS 416 Artificial Intelligence Lecture 19 Reasoning over Time Chapter 15 Lecture 19 Reasoning over Time Chapter 15

Spectrum of choicesConstrained Constrained

DomainDomainUnconstrained Unconstrained

DomainDomain

Speaker Speaker DependentDependent

Voice tags (e.g. Voice tags (e.g. cell phone)cell phone)

Trained Dictation Trained Dictation (Viavoice)(Viavoice)

Speaker Speaker IndependentIndependent

Customer ServiceCustomer Service

(say “one”)(say “one”)What everyone What everyone

wantswants

Page 9: Waveform to phonemes

• 40-50 phones (sounds) occur across all human languages
• 48 phonemes (distinguishable units) in English (according to the ARPAbet)
  – Ceiling = [s iy l ih ng], [s iy l ix ng], or [s iy l en]

Nothing is precise here, so we use an HMM with state variable X_t corresponding to the phone uttered at time t.

• P(E_t | X_t): given the phoneme, what is its waveform?
  – We must have models that adjust for pitch, speed, volume, ...

Page 10: Analog to digital (A to D)

• The diaphragm of the microphone is displaced by the movement of air
• An analog-to-digital converter samples the signal at discrete time intervals (8-16 kHz, 8-bit for speech)

Page 11: Data compression

• 8 kHz at 8 bits is about 0.5 MB for one minute of speech
  – Too much information for constructing P(X_{t+1} | X_t) tables
  – Reduce the signal to overlapping frames (10 msec)
  – Frames have features that are evaluated from the signal
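A sketch of cutting the sampled signal into overlapping frames. At 8 kHz, a 10 msec frame is 80 samples; the 50% overlap chosen here is an assumption for illustration, not something fixed by the slides:

```python
def frame_signal(samples, frame_len=80, step=40):
    """Return overlapping frames of frame_len samples, advancing by
    step samples each time; a tail too short to fill a frame is dropped."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, step)]

# 0.1 s of fake 8 kHz "samples" (just the integers 0..799).
frames = frame_signal(list(range(800)))
```

Each frame would then be reduced to a small feature vector (for example, the energy in a few frequency bands), which is what the next slide compresses further.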

Page 12: More data compression

Features are still too big.

• Consider n features with 256 values each
  – 256^n possible frames
• A table of P(features | phones) would be too large
• Cluster!
  – Reduce the number of options from 256^n to something manageable
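One common way to do this clustering is k-means vector quantization: map each feature vector to the nearest of k codebook entries, so a frame is described by one small integer instead of one of 256^n possibilities. A minimal sketch with invented 2-D "feature vectors" and k = 2:

```python
def nearest(v, centroids):
    """Index of the centroid closest to vector v (squared Euclidean)."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))

def kmeans(vectors, centroids, iters=10):
    """Lloyd's algorithm: assign each vector to its nearest centroid,
    then move each centroid to the mean of its assigned vectors."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for v in vectors:
            groups[nearest(v, centroids)].append(v)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids

# Two obvious clusters of 2-D feature vectors.
data = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
codebook = kmeans(data, [[0.0, 0.0], [1.0, 1.0]])
labels = [nearest(v, codebook) for v in data]
```

After clustering, P(features | phone) only needs one entry per codebook index rather than per raw feature vector.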

Page 13: Phone subdivision

Phones last 5-10 frames.

• It is possible to subdivide a phone into three parts
  – Onset, mid, end
  – [t] = [silent beginning, small explosion, hissing end]
• The sound of a phone changes based on the surrounding phones
  – The brain coordinates the ending of one phone with the beginning of upcoming ones (coarticulation)
  – "Sweet" vs. "stop"
• The state space is increased, but accuracy is improved

Page 14: Words

You say [t ow m ey t ow].

• P(t ow m ey t ow | "tomato")

I say [t ow m aa t ow].

Page 15: Words - coarticulation

The first syllable changes based on dialect.

There are four ways to say "tomato", and we would store P([pronunciation] | "tomato") for each.

• Remember that the diagram would have three stages per phone
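A sketch of what storing P([pronunciation] | word) might look like for the "tomato" example. Only [t ow m ey t ow] and [t ow m aa t ow] appear on the slides; the other two variants and all four probabilities are hypothetical placeholders:

```python
pronunciations = {
    "tomato": {
        # pronunciation (tuple of phones) -> P(pronunciation | word)
        ("t", "ow", "m", "ey", "t", "ow"): 0.45,
        ("t", "ow", "m", "aa", "t", "ow"): 0.45,
        ("t", "ah", "m", "ey", "t", "ow"): 0.05,  # hypothetical variant
        ("t", "ah", "m", "aa", "t", "ow"): 0.05,  # hypothetical variant
    },
}

p = pronunciations["tomato"][("t", "ow", "m", "aa", "t", "ow")]
```

The distribution over pronunciations for a word must sum to 1, and each phone in each pronunciation would expand into its three HMM stages.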

Page 16: Words - segmentation

"Hearing" words in sentences seems easy to us, but:

• Waveforms are fuzzy
• There are no clear gaps to designate word boundaries
• One must work the probabilities to decide whether the current word is continuing with another syllable or whether another word is likely starting

Page 17: Sentences

Bigram model

• P(w_i | w_{1:i-1}) has too many values to determine
• P(w_i | w_{i-1}) is much more manageable
  – We make a first-order Markov assumption about word sequences
  – It is easy to train this from text files
• Much more complicated models are possible that take syntax and semantics into account
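Training the bigram model really is just counting adjacent word pairs in text. A sketch, with a tiny made-up corpus standing in for "text files":

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
pair_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    pair_counts[prev][cur] += 1

def p_bigram(cur, prev):
    """Maximum-likelihood estimate of P(cur | prev)."""
    total = sum(pair_counts[prev].values())
    return pair_counts[prev][cur] / total if total else 0.0
```

In this corpus "the" is followed by "cat" twice and "mat" once, so P(cat | the) = 2/3. A real system would also smooth these counts so unseen pairs don't get probability zero.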

Page 18: Bringing it together

Each transformation is pretty inaccurate.

• Lots of choices
• User "error": stutters, bad grammar
• Subsequent steps can rule out choices from previous steps
  – Disambiguation

Page 19: Bringing it together

Continuous speech

• Words are composed of p 3-state phones
• W words in the vocabulary
• 3pW states in the HMM
  – 10 words, 4 phones each, 3 states per phone = 120 states
• Compute the likelihood of all words in the sequence
  – Viterbi algorithm from Section 15.2
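A compact sketch of the Viterbi algorithm itself: find the single most likely state sequence given the evidence. The two-state model and observation symbols at the bottom are toy numbers for illustration, not a speech model:

```python
def viterbi(prior, T, emit, obs):
    """prior[i] = P(X_0 = i); T[i][j] = P(X_t = j | X_{t-1} = i);
    emit[i][e] = P(e | X = i); obs = observation sequence.
    Returns the most probable state sequence as a list of indices."""
    S = len(prior)
    m = [prior[i] * emit[i][obs[0]] for i in range(S)]  # best prob ending in i
    back = []  # one backpointer list per step after the first
    for e in obs[1:]:
        prev, m, ptr = m, [], []
        for j in range(S):
            best_i = max(range(S), key=lambda i: prev[i] * T[i][j])
            m.append(prev[best_i] * T[best_i][j] * emit[j][e])
            ptr.append(best_i)
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(range(S), key=lambda j: m[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# Two states (0 = "rain", 1 = "dry"); observations: "u" = umbrella, "n" = none.
path = viterbi([0.5, 0.5],
               [[0.7, 0.3], [0.3, 0.7]],
               [{"u": 0.9, "n": 0.1}, {"u": 0.2, "n": 0.8}],
               ["u", "u", "n"])
```

In a speech model the states would be the 3pW phone stages, and in practice the products are computed as sums of log probabilities to avoid underflow.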

Page 20: A final note

Where do all the transition tables come from?

• Word probabilities come from text analysis
• Pronunciation models have been manually constructed for many hours of speaking
  – Some have multiple-state phones identified
• Because this annotation is so expensive to perform, can we annotate or label the waveforms automatically?

Page 21: Expectation Maximization (EM)

Learn the HMM transition and sensor models sans labeled data

• Initialize the models with hand-labeled data
• Use these models to predict states at multiple times t
• Use these predictions as if they were "fact" and update the HMM transition table and sensor models
• Repeat
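The loop above can be sketched as "hard EM" (also called Viterbi training): decode states with the current model, treat the decoded states as labels, re-count the tables, and repeat. For brevity the decode step below just picks the state with the highest emission probability at each time, a simplification of a full Viterbi decode; the states, observations, and initial numbers are all invented:

```python
def hard_em(obs, states, emit, iters=5):
    """obs: observation sequence; emit[s][e] = P(e | s), the initial
    (e.g. hand-labeled) sensor model. Returns re-estimated
    (transition, emission) tables as nested dicts."""
    for _ in range(iters):
        # E-step (simplified): predict a state label for each timestep.
        labels = [max(states, key=lambda s: emit[s].get(e, 1e-9)) for e in obs]
        # M-step: re-count the tables as if the labels were fact.
        trans = {s: {s2: 1e-9 for s2 in states} for s in states}  # tiny smoothing
        counts = {s: {} for s in states}
        for a, b in zip(labels, labels[1:]):
            trans[a][b] += 1
        for s, e in zip(labels, obs):
            counts[s][e] = counts[s].get(e, 0) + 1
        # Normalize each row into a probability distribution.
        trans = {s: {s2: c / sum(row.values()) for s2, c in row.items()}
                 for s, row in trans.items()}
        emit = {s: ({e: c / sum(row.values()) for e, c in row.items()}
                    if row else emit[s])
                for s, row in counts.items()}
    return trans, emit

states = ["r", "d"]
emit0 = {"r": {"u": 0.6, "n": 0.4}, "d": {"u": 0.3, "n": 0.7}}
trans, emit = hard_em(["u", "u", "n", "n", "n", "u"], states, emit0)
```

Full (soft) EM for HMMs is the Baum-Welch algorithm, which weights the counts by the posterior state probabilities from forward-backward instead of committing to one label per timestep.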