![Page 1: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/1.jpg)
Advanced Artificial Intelligence
Lecture 6: Hidden Markov Models and Temporal Filtering
![Page 2: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/2.jpg)
Class-On-A-Slide
[Diagram: the whole class on one slide — an HMM with hidden states X1 → X2 → X3 → X4 → X5, each emitting an observation E1 … E5.]
![Page 3: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/3.jpg)
Example: Minerva
![Page 4: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/4.jpg)
Example: Robot Localization
![Page 5: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/5.jpg)
Example: Groundhog
![Page 6: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/6.jpg)
Example: Groundhog
![Page 7: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/7.jpg)
Example: Groundhog
![Page 8: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/8.jpg)
![Page 9: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/9.jpg)
Overview
- Markov Chains
- Hidden Markov Models
- Particle Filters
- More on HMMs
![Page 10: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/10.jpg)
Reasoning over Time

Often, we want to reason about a sequence of observations:
- Speech recognition
- Robot localization
- User attention
- Medical monitoring
- Financial modeling
![Page 11: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/11.jpg)
Markov Models

A Markov model is a chain-structured BN:
- Each node is identically distributed (stationarity)
- The value of X at a given time is called the state
- Parameters, called transition probabilities or dynamics, specify how the state evolves over time (also the initial probabilities)

As a BN: [X1 → X2 → X3 → X4]
![Page 12: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/12.jpg)
Conditional Independence

Basic conditional independence: the past and the future are independent given the present; each time step depends only on the previous one. This is called the Markov property.

Note that the chain is just a (growing) BN: we can always use generic BN reasoning on it if we truncate the chain at a fixed length.

[Diagram: X1 → X2 → X3 → X4]
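In symbols, the Markov property reads:

$$P(X_t \mid X_1, \ldots, X_{t-1}) \;=\; P(X_t \mid X_{t-1})$$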
![Page 13: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/13.jpg)
Example: Markov Chain
Weather:
- States: X = {rain, sun}
- Transitions (this is a CPT, not a BN!), rows X_{t-1}, columns X_t:

|      | sun | rain |
|------|-----|------|
| sun  | 0.9 | 0.1  |
| rain | 0.1 | 0.9  |

- Initial distribution: 1.0 sun
- What’s the probability distribution after one step?
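A worked answer to the question, using the CPT above:

$$P(X_2 = \text{sun}) = P(\text{sun} \mid \text{sun})\,P(X_1 = \text{sun}) + P(\text{sun} \mid \text{rain})\,P(X_1 = \text{rain}) = 0.9 \cdot 1.0 + 0.1 \cdot 0.0 = 0.9$$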
![Page 14: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/14.jpg)
Mini-Forward Algorithm
Question: What’s P(X) on some day t? This is an instance of variable elimination! Forward simulation propagates the distribution one step at a time:

$$P(x_t) = \sum_{x_{t-1}} P(x_t \mid x_{t-1})\, P(x_{t-1})$$

[Trellis diagram: sun/rain states unrolled over time.]
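A minimal forward-simulation sketch in Python for the weather chain above (the array layout and state ordering are illustrative assumptions):

```python
import numpy as np

# T[i, j] = P(X_t = j | X_t-1 = i); state order: [sun, rain]
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])

p = np.array([1.0, 0.0])  # initial distribution: 1.0 sun
for t in range(2, 5):
    p = p @ T             # P(x_t) = sum over x_t-1 of P(x_t | x_t-1) P(x_t-1)
    print(f"day {t}: P(sun) = {p[0]:.3f}, P(rain) = {p[1]:.3f}")
```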
![Page 15: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/15.jpg)
Example
From an initial observation of sun: P(X1) = ⟨1.0, 0.0⟩, P(X2) = ⟨0.9, 0.1⟩, P(X3) = ⟨0.82, 0.18⟩, …, P(X∞) = ⟨0.5, 0.5⟩

From an initial observation of rain: P(X1) = ⟨0.0, 1.0⟩, P(X2) = ⟨0.1, 0.9⟩, P(X3) = ⟨0.18, 0.82⟩, …, P(X∞) = ⟨0.5, 0.5⟩

(Entries are ⟨P(sun), P(rain)⟩, computed with the 0.9/0.1 transition table above.)
![Page 16: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/16.jpg)
Stationary Distributions

If we simulate the chain long enough, what happens? Uncertainty accumulates, and eventually we have no idea what the state is!

Stationary distributions: for most chains, the distribution we end up in is independent of the initial distribution. It is called the stationary distribution of the chain. Usually, we can only predict a short time out.
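A quick numerical check of the stationary distribution for the weather chain, simulating until the distribution stops changing (the tolerance is an arbitrary choice):

```python
import numpy as np

T = np.array([[0.9, 0.1],   # same transition matrix as above
              [0.1, 0.9]])

p = np.array([1.0, 0.0])    # any initial distribution gives the same limit
while True:
    p_next = p @ T
    if np.max(np.abs(p_next - p)) < 1e-12:  # converged: p satisfies p = pT
        break
    p = p_next
print(p)  # -> [0.5 0.5]
```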
![Page 17: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/17.jpg)
Example: Web Link Analysis

PageRank over a web graph:
- Each web page is a state
- Initial distribution: uniform over pages
- Transitions:
  - With probability c, jump uniformly to a random page (dotted lines, not all shown)
  - With probability 1-c, follow a random outlink (solid lines)

Stationary distribution:
- Will spend more time on highly reachable pages
- Google 1.0 returned the set of pages containing all your keywords, in decreasing rank; now all search engines use link analysis along with many other factors (rank is actually getting less important over time)
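A toy PageRank sketch following the transition rule above; the three-page link graph and c = 0.15 are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical link graph: links[i, j] = 1 if page i links to page j
links = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)
follow = links / links.sum(axis=1, keepdims=True)  # follow a random outlink

c, n = 0.15, links.shape[0]
T = c / n + (1 - c) * follow   # with prob. c jump uniformly, else follow a link

p = np.full(n, 1.0 / n)        # initial distribution: uniform over pages
for _ in range(100):
    p = p @ T                  # the chain settles into its stationary distribution
print(p)                       # pages with more inlinks get more mass
```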
![Page 18: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/18.jpg)
Overview
- Markov Chains
- Hidden Markov Models
- Particle Filters
- More on HMMs
![Page 19: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/19.jpg)
Hidden Markov Models

Markov chains are not so useful for most agents: eventually you don’t know anything anymore, so you need observations to update your beliefs.

Hidden Markov models (HMMs):
- Underlying Markov chain over states S
- You observe outputs (effects) at each time step
- As a Bayes net: [X1 → X2 → X3 → X4 → X5, with each Xt emitting an observation Et]
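The Bayes net above encodes the joint distribution:

$$P(X_{1:T}, E_{1:T}) \;=\; P(X_1)\,\prod_{t=2}^{T} P(X_t \mid X_{t-1})\,\prod_{t=1}^{T} P(E_t \mid X_t)$$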
![Page 20: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/20.jpg)
Example: Robot Localization
t = 0

Sensor model: never more than 1 mistake. Motion model: may fail to execute an action, with small probability.

[Figure: belief over map cells, probability scale from 0 to 1.]

Example from Michael Pfeiffer
![Page 21: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/21.jpg)
Example: Robot Localization
t=1
![Page 22: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/22.jpg)
Example: Robot Localization
t=2
![Page 23: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/23.jpg)
Example: Robot Localization
t=3
![Page 24: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/24.jpg)
Example: Robot Localization
t=4
![Page 25: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/25.jpg)
Example: Robot Localization
t=5
![Page 26: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/26.jpg)
Hidden Markov Model

HMMs have two important independence properties:
- Markov hidden process: the future depends on the past via the present
- The current observation is independent of everything else given the current state

Quiz: does this mean that observations are mutually independent? [No, they are correlated by the hidden state.]

[Diagram: the HMM Bayes net X1 → … → X5 with emissions E1 … E5, as before.]
![Page 27: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/27.jpg)
Inference in HMMs (Filtering)
[Diagram: filtering on the chain, X1 → X2, with evidence E1 incorporated at X1.]
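Filtering alternates two updates on the belief $B(X_t) = P(X_t \mid e_{1:t})$; a standard statement of the recursion:

$$B'(X_{t+1}) = \sum_{x_t} P(X_{t+1} \mid x_t)\, B(x_t) \quad \text{(elapse time)}$$

$$B(X_{t+1}) \propto P(e_{t+1} \mid X_{t+1})\, B'(X_{t+1}) \quad \text{(observe)}$$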
![Page 28: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/28.jpg)
Example
An HMM is defined by:
- Initial distribution: $P(X_1)$
- Transitions: $P(X_t \mid X_{t-1})$
- Emissions: $P(E_t \mid X_t)$
![Page 29: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/29.jpg)
Example HMM
![Page 30: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/30.jpg)
Example: HMMs in Robotics
![Page 31: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/31.jpg)
Overview
- Markov Chains
- Hidden Markov Models
- Particle Filters
- More on HMMs
![Page 32: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/32.jpg)
Particle Filtering

Sometimes |X| is too big to use exact inference:
- |X| may be too big to even store B(X), e.g. when X is continuous
- |X|² may be too big to do updates

Solution: approximate inference
- Track samples of X, not all values
- Samples are called particles
- Time per step is linear in the number of samples
- But: the number needed may be large
- In memory: a list of particles, not states

This is how robot localization works in practice.

[Figure: grid of belief values, e.g. 0.0, 0.1, 0.2, 0.5 in different cells.]
![Page 33: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/33.jpg)
Representation: Particles

Our representation of P(X) is now a list of N particles (samples):
- Generally, N << |X|
- Storing a map from X to counts would defeat the point

P(x) is approximated by the number of particles with value x:
- So, many x will have P(x) = 0!
- More particles, more accuracy

For now, all particles have a weight of 1.

Particles: (3,3) (1,2) (3,3) (3,2) (3,3) (3,2) (2,3) (3,3) (3,3) (2,3)
![Page 34: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/34.jpg)
Particle Filtering: Elapse Time

Each particle is moved by sampling its next position from the transition model. This is like prior sampling: the samples’ frequencies reflect the transition probabilities. Here, most samples move clockwise, but some move in another direction or stay in place.

This captures the passage of time. If we have enough samples, the result is close to the exact values before and after (consistent).
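A minimal sketch of this step, assuming particles are stored in a plain list and the caller supplies a hypothetical transition sampler `sample_transition(x)`:

```python
def elapse_time(particles, sample_transition):
    # Move each particle by sampling its next state from the transition model;
    # as in prior sampling, sample frequencies reflect the transition probabilities.
    return [sample_transition(x) for x in particles]
```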
![Page 35: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/35.jpg)
Particle Filtering: Observe

Slightly trickier:
- Don’t do rejection sampling (why not?)
- We don’t sample the observation, we fix it
- This is similar to likelihood weighting, so we downweight our samples based on the evidence (see the sketch below)

Note that, as before, the probabilities don’t sum to one, since most have been downweighted (in fact, they sum to an approximation of P(e)).
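Continuing the sketch, the observe step fixes the evidence e and downweights each particle by its likelihood, given a hypothetical sensor model `p_evidence(e, x)`:

```python
def observe(particles, e, p_evidence):
    # Weight each particle by the likelihood of the fixed evidence, as in
    # likelihood weighting; the weights need not sum to one.
    return [(x, p_evidence(e, x)) for x in particles]
```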
![Page 36: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/36.jpg)
Particle Filtering: Resample

Rather than tracking weighted samples, we resample: N times, we choose from our weighted sample distribution (i.e., draw with replacement). This is analogous to renormalizing the distribution. Now the update is complete for this time step; continue with the next one.

Old particles: (1,3) w=0.1, (3,2) w=0.9, (3,2) w=0.9, (3,3) w=0.4, (2,3) w=0.3, (2,2) w=0.4, (3,1) w=0.4, (3,3) w=0.4, (2,1) w=0.9, (2,3) w=0.3

New particles: (2,3) w=1, (3,1) w=1, (3,1) w=1, (3,2) w=1, (2,2) w=1, (3,2) w=1, (3,3) w=1, (3,2) w=1, (3,2) w=1, (3,2) w=1
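And the resample step, drawing N new unweighted particles with replacement in proportion to weight:

```python
import random

def resample(weighted_particles):
    # Draw len(particles) states with replacement, proportional to weight;
    # the new particles all carry weight 1 again.
    xs, ws = zip(*weighted_particles)
    return random.choices(xs, weights=ws, k=len(xs))
```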
![Page 37: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/37.jpg)
Particle Filters
![Page 38: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/38.jpg)
Sensor Information: Importance Sampling

$$Bel(x) \;\leftarrow\; \alpha\, p(z \mid x)\, Bel^{-}(x)$$

$$w \;\leftarrow\; \frac{Bel(x)}{Bel^{-}(x)} \;=\; \alpha\, p(z \mid x)$$
![Page 39: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/39.jpg)
Robot Motion

$$Bel^{-}(x) \;\leftarrow\; \int p(x \mid u, x')\, Bel(x')\, \mathrm{d}x'$$
![Page 42: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/42.jpg)
Particle Filter Algorithm

- Sample the next generation of particles using the proposal distribution
- Compute the importance weights: weight = target distribution / proposal distribution
- Resampling: “replace unlikely samples by more likely ones”
![Page 43: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/43.jpg)
Particle Filter Algorithm

1. Algorithm particle_filter(S_{t-1}, u_{t-1}, z_t):
2.   S_t = ∅, η = 0
3.   For i = 1 … n:  (generate new samples)
4.     Sample index j(i) from the discrete distribution given by w_{t-1}
5.     Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
6.     Compute importance weight w_t^i = p(z_t | x_t^i)
7.     Update normalization factor η = η + w_t^i
8.     Insert ⟨x_t^i, w_t^i⟩ into S_t
9.   For i = 1 … n:
10.    Normalize weights: w_t^i = w_t^i / η
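A runnable sketch of the algorithm above; the motion and sensor models `sample_motion(x, u)` and `p_sensor(z, x)` are hypothetical callables supplied by the user:

```python
import random

def particle_filter(S_prev, u_prev, z, sample_motion, p_sensor):
    """One step of the particle filter; S_prev is a list of (x, w) pairs."""
    xs, ws = zip(*S_prev)
    S, eta = [], 0.0
    for _ in range(len(S_prev)):
        x_prev = random.choices(xs, weights=ws, k=1)[0]  # step 4: sample index j(i)
        x = sample_motion(x_prev, u_prev)                # step 5: sample from p(x_t | x_t-1, u_t-1)
        w = p_sensor(z, x)                               # step 6: importance weight p(z_t | x_t)
        eta += w                                         # step 7: normalization factor
        S.append((x, w))                                 # step 8: insert into S_t
    return [(x, w / eta) for (x, w) in S]                # steps 9-10: normalize weights
```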
![Page 44: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/44.jpg)
Overview
- Markov Chains
- Hidden Markov Models
- Particle Filters
- More on HMMs
![Page 45: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/45.jpg)
Other Uses of HMMs

- Find the most likely sequence of states: the Viterbi algorithm (sketched below)
- Learn HMM parameters from data: the Baum-Welch (EM) algorithm
- Other types of HMMs:
  - Continuous, Gaussian-linear: the Kalman filter
  - Structured transition/emission probabilities: dynamic Bayes networks (DBNs)
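A compact Viterbi sketch for a discrete HMM, assuming the P(X1), P(Xt | Xt-1), P(Et | Xt) tables above are given as nested dictionaries (all names here are illustrative):

```python
def viterbi(obs, states, init, trans, emit):
    # V[x]: probability of the best state sequence ending in x at the current step
    V = {x: init[x] * emit[x][obs[0]] for x in states}
    back = []  # back-pointers: best predecessor of each state at each step
    for e in obs[1:]:
        prev = {x: max(states, key=lambda xp, x=x: V[xp] * trans[xp][x]) for x in states}
        V = {x: V[prev[x]] * trans[prev[x]][x] * emit[x][e] for x in states}
        back.append(prev)
    x = max(states, key=V.get)   # most likely final state
    path = [x]
    for prev in reversed(back):  # follow back-pointers to recover the sequence
        x = prev[x]
        path.append(x)
    return list(reversed(path))
```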
![Page 46: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/46.jpg)
Real HMM Examples

Speech recognition HMMs:
- Observations are acoustic signals (continuous valued)
- States are specific positions in specific words (so, tens of thousands)

Machine translation HMMs:
- Observations are words (tens of thousands)
- States are translation options (dozens per word)

Robot tracking:
- Observations are range readings (continuous)
- States are positions on a map (continuous)
![Page 47: Advanced Artificial Intelligence](https://reader035.vdocuments.site/reader035/viewer/2022062816/5681560d550346895dc3d04f/html5/thumbnails/47.jpg)
HMM Application Domain: Speech
Speech input is an acoustic wave form
[Waveform figure: speech segmented as “s p ee ch l a b”; detail of the “l” to “a” transition.]

Graphs from Simon Arnfield’s web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/