
Page 1: Dynamic Bayesian Networks (DBNs)

Dynamic Bayesian Networks (DBNs)

Dave, Hsieh Ding Fei

Frank, Yip Keung

Page 2: Dynamic Bayesian Networks (DBNs)

Outline

Introduction to DBNs
Inference in DBNs
  Types of inference
  Exact inference
  Approximate inference
Applications
Conclusion

Page 3: Dynamic Bayesian Networks (DBNs)

Introduction to DBNs

Motivation

Bayesian Network (BN) models assume a static problem domain:
  Each observable quantity is observed once and for all
  Confidence in an observation holds for all time

DBNs target domains involving repeated observations, where the process evolves dynamically over time
Examples: monitoring a patient, traffic monitoring, etc.

Page 4: Dynamic Bayesian Networks (DBNs)

Introduction to DBNs

Assumptions

The process is modeled in discrete time slices
At time 1 the state is X(1); at time t the state is X(t)
By the chain rule:
P(X(1),…, X(t)) = P(X(1)) P(X(2)|X(1)) … P(X(t)|X(1),…, X(t-1))

Markov property
Given the current state, the next state is independent of all previous states:
P(X(1),…, X(t)) = P(X(1)) P(X(2)|X(1)) … P(X(t)|X(t-1))
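A minimal numeric sketch of this factorization (not from the original slides; the two-state chain and all probabilities are hypothetical):

```python
import numpy as np

# Hypothetical two-state Markov chain.
p_init = np.array([0.6, 0.4])        # P(X(1))
P_trans = np.array([[0.7, 0.3],      # P(X(t) | X(t-1))
                    [0.2, 0.8]])

def joint_prob(states):
    """P(X(1),...,X(t)) = P(X(1)) * prod over t of P(X(t) | X(t-1))."""
    p = p_init[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= P_trans[prev, cur]
    return p

print(joint_prob([0, 0, 1]))  # 0.6 * 0.7 * 0.3 = 0.126
```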

Page 5: Dynamic Bayesian Networks (DBNs)

Introduction to DBNs

DBN model (DAG representation)

An edge expresses how tightly two nodes are coupled:
  An immediate effect is an edge within the same time slice
  A long-term effect is an edge between time slices

Page 6: Dynamic Bayesian Networks (DBNs)

Introduction to DBNs

A special case of DBN: the HMM

The state of an HMM evolves in a Markovian way, so an HMM can be modeled as a simple DBN
Each time slice contains two variables: the state q and the observation o (a small sketch of this encoding follows below)
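To make the two-variables-per-slice structure concrete, here is a minimal sketch (not from the original slides; the matrices pi, A, and B are hypothetical) of an HMM written as an unrolled DBN:

```python
import numpy as np

# HMM as a simple DBN: each slice holds a hidden state q and an observation o.
pi = np.array([0.5, 0.5])        # P(q(1))
A  = np.array([[0.9, 0.1],       # P(q(t) | q(t-1)): edge between time slices
               [0.2, 0.8]])
B  = np.array([[0.8, 0.2],       # P(o(t) | q(t)): edge within a time slice
               [0.3, 0.7]])

def sample_slice(rng, q_prev=None):
    """Sample one time slice (q, o) of the unrolled DBN."""
    q = rng.choice(2, p=pi if q_prev is None else A[q_prev])
    o = rng.choice(2, p=B[q])
    return q, o

rng = np.random.default_rng(0)
q, o = sample_slice(rng)         # slice 1
for _ in range(4):               # slices 2..5
    q, o = sample_slice(rng, q)
```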

Page 7: Dynamic Bayesian Networks (DBNs)

Inference

Types of inference

Prediction
  Given a probability distribution over the current state, predict the distribution over future states

Monitoring
  Given the observation (evidence) in every time slice t, maintain the distribution over the current state
  Belief state at time T: P(X(T) | o(1),…, o(T))
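For HMM-like DBNs this belief state can be maintained recursively; a standard filtering update (stated here for completeness, not taken from the slides) is:

P(X(T) | o(1),…, o(T)) ∝ P(o(T) | X(T)) · Σ over X(T-1) of P(X(T) | X(T-1)) · P(X(T-1) | o(1),…, o(T-1))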

Page 8: Dynamic Bayesian Networks (DBNs)

Inference

Probability estimation
  Given a sequence of observations, one in every time slice, determine the distribution over each intermediate state:
  P(X(t) | o(1),…, o(T)) for t = 1, 2, …, T

Explanation
  Given an initial state and a sequence of observations o(1),…, o(T), determine the most likely sequence of states X(1),…, X(T)

Page 9: Dynamic Bayesian Networks (DBNs)

Exact inference

For most inference tasks, a belief state needs to be maintained

Belief state
  A probability distribution over the current state
  It summarizes all information about the history
  It needs to be maintained compactly

Page 10: Dynamic Bayesian Networks (DBNs)

Exact inference

How to accomplish exact inference in a simple DBN, the HMM:
  Given a number of time slices, the DBN is just a very long BN with a regular structure
  Standard Bayesian network algorithms can be used

Probability estimation task
  Clique tree propagation algorithm
  Forward-backward algorithm (a sketch follows below)
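As a concrete reference, here is a minimal sketch of the forward-backward algorithm for the probability estimation (smoothing) task, reusing the hypothetical HMM parameters pi, A, B from above:

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Smoothing: returns P(X(t) | o(1..T)) for every t.
    pi: initial distribution, A: transition matrix,
    B: emission matrix, obs: observation indices."""
    T, S = len(obs), len(pi)
    alpha, beta = np.zeros((T, S)), np.ones((T, S))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                    # forward pass
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()           # rescale to avoid underflow
    for t in range(T - 2, -1, -1):           # backward pass
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                     # combine the two passes
    return gamma / gamma.sum(axis=1, keepdims=True)
```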

Page 11: Dynamic Bayesian Networks (DBNs)

Exact inference

Monitoring task
  Only the forward pass of the forward-backward algorithm

Explanation task
  Viterbi's algorithm (sketched after this slide)

Prediction task
  Based only on the current belief state, because it already summarizes the history
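For the explanation task, a minimal sketch of Viterbi's algorithm (same hypothetical HMM parameterization as before):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence X(1..T) given o(1..T), in log space."""
    T, S = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):              # trace the backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]                          # e.g. viterbi(pi, A, B, [0, 0, 1])
```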

Page 12: Dynamic Bayesian Networks (DBNs)

Exact inference

dHugin: an exact inference computational system
  Provides the inference machinery of classical discrete time-series analysis
  Handles discrete multivariate dynamic systems

Page 13: Dynamic Bayesian Networks (DBNs)

dHugin

Introduces the notion of a dynamic time window
  The window contains several time slices and is represented by a junction tree
  Operations: window expansion and window reduction
  The window is expanded to perform forecasting
  Inference is formulated in terms of message passing in the junction tree

Page 14: Dynamic Bayesian Networks (DBNs)

dHugin

Window expansion
1. Add k new consecutive time slices to the forecast model
2. Move the k oldest time slices of the forecast model to the time window
3. Moralize the compound graph consisting of the window and the k new slices
4. Triangulate the time window
5. Construct a new junction tree

Page 15: Dynamic Bayesian Networks (DBNs)

dHugin

Window reduction (suppose the time window holds k+1 time slices)
1. Turn the k oldest slices in the time window into k backward-smoothing models
2. The remaining (k+1)'st slice becomes the new time window

Page 16: Dynamic Bayesian Networks (DBNs)

Forecasting

Calculate estimates of the distributions of future variables given past observations and present variables

Forecasting within the window
  Propagation

Forecasting beyond the window (see the bookkeeping sketch below)
1. A series of alternating expansion and reduction steps
2. Propagation is performed in each step
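dHugin's real operations moralize, triangulate, and rebuild junction trees over the window; the sketch below only tracks the slice bookkeeping of expansion, reduction, and the alternating forecasting loop. The class and method names are invented for illustration:

```python
class DynamicTimeWindow:
    """Bookkeeping-only sketch of dHugin's dynamic time window."""
    def __init__(self, num_forecast_slices):
        self.window = [0]                     # slice indices in the time window
        self.forecast = list(range(1, 1 + num_forecast_slices))
        self.smoothing = []                   # backward-smoothing models
        self.next_slice = 1 + num_forecast_slices

    def expand(self, k):
        """Add k new slices to the forecast model, then move its k oldest
        slices into the window (real dHugin then moralizes, triangulates,
        and builds a new junction tree)."""
        self.forecast += range(self.next_slice, self.next_slice + k)
        self.next_slice += k
        self.window += self.forecast[:k]
        del self.forecast[:k]

    def reduce(self, k):
        """Turn the k oldest window slices into backward-smoothing models."""
        self.smoothing += self.window[:k]
        del self.window[:k]

    def forecast_beyond(self, steps, k=1):
        """Alternate expansion and reduction; propagation omitted here."""
        for _ in range(steps):
            self.expand(k)
            self.reduce(k)
```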

Page 17: Dynamic Bayesian Networks (DBNs)

Problem of exact inference

Drawback: exact inference is complex and requires a large amount of space
The key issue is how to maintain the belief state:
  Representing it naively requires an exponential number of entries
  It cannot be represented compactly by exploiting structure, because there is no conditional independence structure to exploit
  Variables become correlated with each other as time goes on, which prevents the use of factorization ideas
  They are not even conditionally independent within a single time slice

Page 18: Dynamic Bayesian Networks (DBNs)

Approximate inference

Objective
  Maintain and propagate an approximate belief state when the state space of the dynamic process is very large
  This improves the complexity of probabilistic inference

Page 19: Dynamic Bayesian Networks (DBNs)

Approximate inference

Two approaches
  Structural approximation: ignore weak correlations between variables in a belief state
  Stochastic simulation: randomly sample from the states in the belief state

Page 20: Dynamic Bayesian Networks (DBNs)

Structural approximation

Problems in exact inference
  All variables in a belief state are correlated
  The belief state is expressed as a full joint distribution, which needs an exponential number of table entries

Objective of structural approximation
  Use factorization to represent a complex system compactly, exploiting the fact that the variables interact only weakly with one another

Page 21: Dynamic Bayesian Networks (DBNs)

Structural approximation

Example: monitoring a freeway with multiple cars
  The states of different cars (e.g. velocity, location, etc.) become correlated after a certain period of time
  The approximation is to assume that these correlations are not very strong, so each car can be treated as independent
  The approximate belief state can then be represented in a factorized way, as a product of separate distributions, one for each car

Page 22: Dynamic Bayesian Networks (DBNs)

Structural approximation

We can define a set of disjoint clusters Y1,…, Yk such that Y = Y1 ∪ Y2 ∪ … ∪ Yk
We maintain an approximate belief state that factorizes over the clusters:

σ̂(t)(Y(t)) = ∏i σ̂(t)(Yi(t))

If this approximate belief state at time t were simply propagated forward to time t+1, all variables would become correlated again

Page 23: Dynamic Bayesian Networks (DBNs)

Structural approximation

This re-correlation problem can be solved by the following process:
  At each time t, take the factored belief state σ̂(t) and propagate it to time t+1, obtaining a new distribution σ̃(t+1)
  Approximate σ̃(t+1) by its independent marginals: compute σ̃(t+1)(Yi(t+1)) for every i, i.e. marginalize σ̃(t+1) onto each cluster Yi(t+1)
  The product of these marginals is the new factored belief state:

σ̂(t+1)(Y(t+1)) = ∏i σ̃(t+1)(Yi(t+1))
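A minimal sketch of this propagate-then-factorize cycle for two binary clusters (all numbers hypothetical; the coupled transition model T is just a random stochastic matrix):

```python
import numpy as np

# Joint state (Y1, Y2) is flattened to index 2*y1 + y2.
T = np.random.default_rng(1).dirichlet(np.ones(4), size=4)  # rows: P(Y(t+1) | Y(t)=row)

def propagate_and_factorize(m1, m2):
    """One step: product of marginals -> exact propagation -> new marginals."""
    joint = np.outer(m1, m2).ravel()             # factored belief at time t
    joint_next = joint @ T                       # exact propagation to t+1
    grid = joint_next.reshape(2, 2)              # rows: Y1(t+1), cols: Y2(t+1)
    return grid.sum(axis=1), grid.sum(axis=0)    # independent marginals again

m1 = m2 = np.array([0.5, 0.5])
for _ in range(3):
    m1, m2 = propagate_and_factorize(m1, m2)
```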

Page 24: Dynamic Bayesian Networks (DBNs)

Structural approximation

Two sources of error
  The accumulated error that results from propagation
  The error that results from approximating σ̃(t+1) by the product of its marginals

The errors remain bounded due to two opposing forces:
  Propagation from time t to time t+1 adds noise to both the exact and the approximate belief state, which reduces the difference between them and hence the error
  Approximation increases the error

Page 25: Dynamic Bayesian Networks (DBNs)

Stochastic simulation

Likelihood weighting (LW)
  Finds the approximate belief state using sampling

Algorithm of LW (not captured in the transcript; a sketch follows below)
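The LW algorithm itself appears only as a figure in the original slides; here is a minimal sketch for the hypothetical HMM used earlier:

```python
import numpy as np

def likelihood_weighting(pi, A, B, obs, n=1000, seed=0):
    """Approximate belief state P(X(T) | o(1..T)) by likelihood weighting:
    trajectories are sampled from the prior (transition model only) and
    each observation merely reweights them."""
    rng = np.random.default_rng(seed)
    S = len(pi)
    states = rng.choice(S, size=n, p=pi)
    weights = B[states, obs[0]]
    for o in obs[1:]:
        states = np.array([rng.choice(S, p=A[s]) for s in states])
        weights = weights * B[states, o]       # evidence changes weights only
    belief = np.bincount(states, weights=weights, minlength=S)
    return belief / belief.sum()
```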

Page 26: Dynamic Bayesian Networks (DBNs)

Stochastic simulation

Drawback of LW
  LW generates the samples at time t according to the prior distribution (conditioned on the samples at time t-1)
  The observation affects the weights, but not the choice of samples
  The samples therefore become increasingly irrelevant as time grows, since many of them are unlikely to explain the current observation

Example: monitoring a car's location

Page 27: Dynamic Bayesian Networks (DBNs)

Stochastic simulation

By t = 5 the samples are widely dispersed, far away from the exact location of the vehicle
An improved algorithm, survival of the fittest, addresses this

Page 28: Dynamic Bayesian Networks (DBNs)

Stochastic simulation

Survival of the fittest (SOF)
  Propagates likely samples more often than unlikely samples

Algorithm of SOF (not captured in the transcript; a sketch follows below)
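As with LW, the SOF algorithm is shown only as a figure in the original slides; the sketch below adds the one step that distinguishes it, resampling in proportion to the weights after each observation:

```python
import numpy as np

def sof_filter(pi, A, B, obs, n=1000, seed=0):
    """Survival of the fittest: LW plus per-step resampling, so likely
    samples are propagated more often than unlikely ones."""
    rng = np.random.default_rng(seed)
    S = len(pi)
    states = rng.choice(S, size=n, p=pi)
    for t, o in enumerate(obs):
        if t > 0:
            states = np.array([rng.choice(S, p=A[s]) for s in states])
        w = B[states, o]
        w = w / w.sum()
        states = states[rng.choice(n, size=n, p=w)]   # fittest survive
    return np.bincount(states, minlength=S) / n
```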

Page 29: Dynamic Bayesian Networks (DBNs)

Stochastic simulation

Belief-state propagation over time (figure):
(a) exact belief state
(b) belief state using LW
(c) belief state using SOF

Page 30: Dynamic Bayesian Networks (DBNs)

Application: robot localization

Track a robot moving around in an environment
State variables
  x, y location
  Orientation
Transition model
  Corresponds to motion: the next position is a Gaussian around a linear function of the current position
Observation model
  The probability that a sonar detects an obstacle (a sketch of both models follows below)
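A minimal sketch of the two models named above (the dynamics matrices, noise level, and sonar range are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def move(pose, control, noise_std=0.1):
    """Transition model: the next pose is a Gaussian around a linear
    function of the current pose (x, y, orientation) and the control."""
    F = np.eye(3)                      # hypothetical dynamics matrix
    G = np.array([[1.0, 0.0],          # hypothetical control matrix
                  [0.0, 1.0],
                  [0.0, 0.5]])
    return F @ pose + G @ control + rng.normal(0.0, noise_std, size=3)

def p_sonar_detects(pose, obstacle, max_range=5.0):
    """Observation model: probability that the sonar detects an obstacle,
    decaying linearly with distance (hypothetical form)."""
    d = np.hypot(obstacle[0] - pose[0], obstacle[1] - pose[1])
    return max(0.0, 1.0 - d / max_range)

pose = np.array([0.0, 0.0, 0.0])
pose = move(pose, np.array([1.0, 0.1]))   # one motion step
```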

Page 31: Dynamic Bayesian Networks (DBNs)

Conclusion

Concept of DBNs
Inference in DBNs
  Four types of inference
  Exact inference
    dHugin
  Approximate inference
    Structural approximation
    Search-based
    Stochastic simulation
Applications
  Robot localization