
PARTICLE FILTERING APPROACH TO STATE ESTIMATION IN BOOLEAN DYNAMICAL SYSTEMS

Ulisses Braga-Neto

Department of Electrical and Computer Engineering
Texas A&M University

College Station, Texas 77843
E-mail: [email protected]

ABSTRACT

Exact optimal state estimation for discrete-time Boolean dynamical systems may become computationally impractical if the system dimensionality is large. In this paper, we consider a particle filtering approach to address this problem. The methodology is illustrated through application to state tracking in high-dimensional Boolean network models. The results show that the particle filter can be very accurate with a moderate number of particles. The impact of resampling on performance is also investigated.

Index Terms— Boolean Dynamical Systems, Boolean Networks, Optimal State Estimation, Particle Filtering, Sequential Monte-Carlo Methods.

1. INTRODUCTION

In this paper we investigate the application of particle filtering [1–3] to the problem of optimal estimation in dynamical systems consisting of Boolean state variables governed by networks of logical gates, updated and observed through noise at discrete time intervals. Such models are of great interest in Genomic Signal Processing and other application areas.

In [4, 5], a signal model for Boolean dynamical systems was introduced. It was shown that this model includes as special cases the Boolean Network (BN), Boolean Network with perturbation (BNp), and Probabilistic Boolean Network (PBN) models. The optimal recursive MMSE estimator for the proposed signal model was called the Boolean Kalman Filter (BKF), and an algorithm for its computation was provided.

If full knowledge about the model is available, the BKF provides the exact optimal state estimator. However, for large numbers of state variables, the computation of the BKF becomes impractical. Here we propose a particle filtering algorithm that successfully addresses the high-dimensionality problem. Numerical state-tracking experiments with high-dimensional models demonstrate the accuracy of the proposed method.

This paper is organized as follows. Section 2 reviews the Boolean signal model and its MMSE estimator, the Boolean Kalman Filter. Section 3 introduces the particle filter algorithm for state estimation under this model. Section 4 presents numerical results. Finally, Section 5 provides a summary and conclusions.

2. BOOLEAN SIGNAL MODEL

Assume that the system is described by a state process $\{X_k;\, k = 0, 1, \ldots\}$, where $X_k \in \{0,1\}^d$ is a Boolean vector of size $d$. The state is observed indirectly through the observation process $\{Y_k;\, k = 1, 2, \ldots\}$, where $Y_k$ is a general vector of any number of continuous or discrete measurements. The state and observation processes satisfy the signal model specified by:

$$
\begin{aligned}
X_k &= f(X_{k-1}) \oplus n_k && \text{(state model)} \\
Y_k &= h(X_k, v_k) && \text{(observation model)}
\end{aligned}
\qquad (1)
$$

for $k = 1, 2, \ldots$. Here, "$\oplus$" indicates component-wise modulo-2 addition, $f: \{0,1\}^d \to \{0,1\}^d$ is an arbitrary network function, which expresses a logical relationship between the state vectors at consecutive time points, $h: \{0,1\}^d \to O$ is a general function mapping the current state into the observation space $O$, whereas $\{n_k, v_k;\, k = 1, 2, \ldots\}$ are white noise processes, with $n_k \in \{0,1\}^d$ and $v_k \in O$. The noise processes are "white" in the sense that the noises at distinct time points are uncorrelated random variables. It is also assumed that the noise processes are uncorrelated with each other and with the state process.
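
As an illustration, the following minimal Python sketch simulates this signal model in the Bernoulli-noise special case used later in Section 4 (observation model $Y_k = X_k \oplus v_k$). The random truth-table representation of the network function $f$ and all names below are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8       # number of Boolean state variables
p = 0.05    # Bernoulli parameter of the state (perturbation) noise n_k
q = 0.10    # Bernoulli parameter of the observation noise v_k

# A network function f: {0,1}^d -> {0,1}^d, represented as a random truth
# table with one next-state row per current state (illustrative choice only).
truth_table = rng.integers(0, 2, size=(2 ** d, d), dtype=np.uint8)

def f(x):
    """Network function: look up the next state for the Boolean vector x."""
    idx = int("".join(str(b) for b in x), 2)   # encode x as an integer index
    return truth_table[idx]

def step(x_prev):
    """One step of the model: X_k = f(X_{k-1}) XOR n_k,  Y_k = X_k XOR v_k."""
    n_k = (rng.random(d) < p).astype(np.uint8)
    x_k = f(x_prev) ^ n_k
    v_k = (rng.random(d) < q).astype(np.uint8)
    y_k = x_k ^ v_k
    return x_k, y_k

x = rng.integers(0, 2, size=d, dtype=np.uint8)   # X_0 drawn uniformly
for k in range(1, 6):
    x, y = step(x)
    print(k, x, y)
```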

The optimal filtering problem consists of finding an estimator $\hat{X}_k = g(Y_1, \ldots, Y_k)$ of the state $X_k$ that optimizes a given performance criterion among all possible functions of $Y_1, \ldots, Y_k$. The two criteria considered here are the conditional mean-square error (MSE):

$$\mathrm{MSE}(Y_1, \ldots, Y_k) = E\left[\, \|\hat{X}_k - X_k\|^2 \mid Y_k, \ldots, Y_1 \right] \qquad (2)$$

and the (unconditional) mean-square error

$$\mathrm{MSE} = E\left[\, \|\hat{X}_k - X_k\|^2 \right] = E\left[\, \mathrm{MSE}(Y_1, \ldots, Y_k) \right]. \qquad (3)$$
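
As a side note (not in the original), when the estimate $\hat{X}_k$ is itself a Boolean vector the squared error above is simply a bit count:

$$\|\hat{X}_k - X_k\|^2 = \sum_{i=1}^{d} \big(\hat{X}_k(i) - X_k(i)\big)^2 = \sum_{i=1}^{d} I\big(\hat{X}_k(i) \neq X_k(i)\big),$$

so the MSE equals the expected number of mis-estimated state variables, i.e., the expected Hamming distance between $\hat{X}_k$ and $X_k$.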


The BKF provides the MMSE state estimator, according to both criteria above, and may be computed exactly in a recursive fashion, as shown in [4]. Briefly, let $(x^1, \ldots, x^{2^d})$ be an arbitrary enumeration of the possible state vectors. For each time $k = 1, 2, \ldots$, define the posterior distribution vectors (PDV) $\Pi_{k|k}$ and $\Pi_{k|k-1}$ of length $2^d$ by means of

$$\Pi_{k|k}(i) = P\left(X_k = x^i \mid Y_k, \ldots, Y_1\right), \qquad (4)$$

$$\Pi_{k|k-1}(i) = P\left(X_k = x^i \mid Y_{k-1}, \ldots, Y_1\right), \qquad (5)$$

for $i = 1, \ldots, 2^d$. Let the prediction matrix $M_k$ of size $2^d \times 2^d$ be the transition matrix of the Markov chain defined by the state model: $(M_k)_{ij} = P(X_k = x^i \mid X_{k-1} = x^j) = P(n_k = x^i \oplus f(x^j))$, for $i, j = 1, \ldots, 2^d$. Additionally, given a value of the observation vector $y$, let the update matrix $T_k(y)$, also of size $2^d \times 2^d$, be a diagonal matrix defined by the observation model: $(T_k(y))_{jj} = p_{v_k}(h(X_k, v_k) = y \mid X_k = x^j)$, for $j = 1, \ldots, 2^d$. Finally, define the matrix $A$ of size $d \times 2^d$ via $A = [x^1 \cdots x^{2^d}]$.

The following result, which appears in [4], gives a procedure to compute the MMSE state estimator.

Theorem 1. (Boolean Kalman Filter.) The optimal minimum MSE estimator $\hat{X}_k$ of the state $X_k$ given the observations $Y_1, \ldots, Y_k$ up to time $k$, according to either criterion (2) or (3), is given by

$$\hat{X}_k = \overline{E\left[X_k \mid Y_k, \ldots, Y_1\right]}, \qquad (6)$$

where the overbar denotes component-wise thresholding, $\bar{v}(i) = I_{v(i) > 1/2}$, for $i = 1, \ldots, d$. This estimator and its optimal conditional MSE can be computed by the following procedure.

1. Initialization Step: The initial PDV is given by $\Pi_{0|0}(i) = P(X_0 = x^i)$, for $i = 1, \ldots, 2^d$.

For $k = 1, 2, \ldots$, do:

2. Prediction Step: Given the previous PDV $\Pi_{k-1|k-1}$, the predicted PDV is given by $\Pi_{k|k-1} = M_k\, \Pi_{k-1|k-1}$.

3. Update Step: Given the current observation $Y_k = y_k$, let $\beta_k = T_k(y_k)\, \Pi_{k|k-1}$. The updated PDV $\Pi_{k|k}$ is obtained by normalizing $\beta_k$ to obtain a probability measure: $\Pi_{k|k} = \beta_k / \|\beta_k\|_1$.

4. MMSE Estimator Computation Step: The MMSE estimator is given by

$$\hat{X}_k = \overline{A\, \Pi_{k|k}} \qquad (7)$$

with optimal conditional MSE

$$\mathrm{MSE}(Y_1, \ldots, Y_k) = \left\| \min\{A\, \Pi_{k|k},\, (A\, \Pi_{k|k})^c\} \right\|_1, \qquad (8)$$

where the minimum is applied component-wise and $v^c(i) = 1 - v(i)$, for $i = 1, \ldots, d$.
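
To make the procedure concrete, here is a minimal numerical sketch of Theorem 1 for small $d$, assuming the Bernoulli noise model used in Section 4 so that $M_k$ and the diagonal of $T_k(y)$ have closed forms; the helper names (enumerate_states, build_M, build_T_diag, bkf_step) are illustrative, not from the paper.

```python
import numpy as np
from itertools import product

def enumerate_states(d):
    """All 2^d Boolean state vectors x^1, ..., x^{2^d}, as rows (shape (2^d, d))."""
    return np.array(list(product([0, 1], repeat=d)), dtype=np.uint8)

def build_M(states, f, p):
    """Prediction matrix: (M)_{ij} = P(n_k = x^i XOR f(x^j)) for Bernoulli(p) noise."""
    n, d = states.shape
    M = np.empty((n, n))
    for j in range(n):
        ham = np.sum(states ^ f(states[j]), axis=1)   # Hamming distances to f(x^j)
        M[:, j] = (p ** ham) * ((1 - p) ** (d - ham))
    return M

def build_T_diag(states, y, q):
    """Diagonal of T_k(y) for the observation model Y_k = X_k XOR v_k, v_k ~ Bernoulli(q)."""
    d = states.shape[1]
    ham = np.sum(states ^ y, axis=1)
    return (q ** ham) * ((1 - q) ** (d - ham))

def bkf_step(pdv, M, T_diag, A):
    """One prediction/update cycle of the Boolean Kalman Filter (Theorem 1)."""
    pred = M @ pdv                              # prediction step: Pi_{k|k-1}
    beta = T_diag * pred                        # update step: beta_k = T_k(y_k) Pi_{k|k-1}
    pdv = beta / beta.sum()                     # normalize to obtain Pi_{k|k}
    mean = A @ pdv                              # A Pi_{k|k}, component-wise in [0,1]
    x_hat = (mean > 0.5).astype(np.uint8)       # thresholding at 1/2, eqs. (6)-(7)
    mse = np.minimum(mean, 1.0 - mean).sum()    # conditional MSE, eq. (8)
    return pdv, x_hat, mse

# Usage sketch (f, p, q, and the observation sequence are problem-specific):
# states = enumerate_states(d); A = states.T.astype(float)
# pdv = np.full(2 ** d, 1.0 / 2 ** d)           # e.g., a uniform initial PDV
# M = build_M(states, f, p)
# for y in observations:
#     pdv, x_hat, mse = bkf_step(pdv, M, build_T_diag(states, y, q), A)
```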

3. PARTICLE FILTERING ALGORITHM

Application of the BKF may become computationally impractical if the dimensionality of the state vector is too high. For example, with 16 Boolean variables in the state vector, the total size of the state space is $2^{16} = 65{,}536$, and the prediction matrix $M$ is of size $65{,}536 \times 65{,}536$, requiring $2^{16} \times 2^{16} \times 8$ bytes $= 2^{35}$ bytes $\approx 34$ GB of storage space.
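
A quick check of this figure (assuming 8-byte double-precision entries):

```python
# Dense 2^16 x 2^16 prediction matrix at 8 bytes (float64) per entry.
n_states = 2 ** 16
print(n_states * n_states * 8)         # 34359738368 bytes = 2^35 bytes
print(n_states * n_states * 8 / 1e9)   # ~34.4 GB
```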

We consider here a sequential Monte-Carlo approach, popularly known as a particle filter [3], in order to obtain an approximate solution to the optimal state estimation problem. We employ $P(X_k \mid X_{k-1})$ as the proposal distribution for generating particles $x_{k,i}$, and compute the particle weights $Q_{k,i}$ via the likelihood $p_{v_k}(h(X_k, v_k) = Y_k \mid X_k)$. Resampling of the particles is done via multinomial sampling whenever the effective sample size $N_{\mathrm{eff}} = \left( \sum_{i=1}^{N} Q_{k,i}^2 \right)^{-1}$ falls below a minimum size. The algorithm can be summarized as follows.

1. Initialization Step: Set $N$ (number of particles) and $N_T$ (minimum effective sample size that triggers resampling).

Sample $N$ particles from the initial state distribution: $x_{0,i} \sim \mathrm{Multinomial}(N, \Pi_{0|0})$, for $i = 1, \ldots, N$.

For $k = 1, 2, \ldots$, do:

2. Prediction Step: Generate new particles from the old particles using the state process: $x_{k,i} = f(x_{k-1,i}) \oplus n_k$, for $i = 1, \ldots, N$.

3. Update Step: Given the current observation $Y_k$, calculate the weights $\tilde{Q}_{k,i} = p_{v_k}(h(x_{k,i}, v_k) = Y_k \mid X_k = x_{k,i})$, for $i = 1, \ldots, N$. The updated weights are given by $Q_{k,i} = \tilde{Q}_{k,i} / \sum_{j=1}^{N} \tilde{Q}_{k,j}$, for $i = 1, \ldots, N$.

4. Resampling Step: If $N_{\mathrm{eff}} = \left( \sum_{i=1}^{N} Q_{k,i}^2 \right)^{-1} < N_T$, then obtain new particles $x_{k,i} \sim \mathrm{Multinomial}(N, Q_{k,1}, \ldots, Q_{k,N})$, for $i = 1, \ldots, N$, and set $Q_{k,i} = 1/N$, for $i = 1, \ldots, N$.

5. PDV Calculation Step: The updated PDV $\Pi_{k|k}$ is obtained by $\Pi_{k|k}(i) = \sum_{\{j \,\mid\, x_{k,j} = x^i\}} Q_{k,j}$, for $i = 1, \ldots, 2^d$.

6. Approximate MMSE Estimator Computation Step: The approximate MMSE estimator is given by

$$\hat{X}_k = \overline{A\, \Pi_{k|k}} \qquad (9)$$

with optimal conditional MSE

$$\mathrm{MSE}(Y_1, \ldots, Y_k) = \left\| \min\{A\, \Pi_{k|k},\, (A\, \Pi_{k|k})^c\} \right\|_1. \qquad (10)$$

Note that the MMSE estimator computation step is the same as in the BKF, except that the result here is approximate, due to the Monte-Carlo computation.
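
The following is a compact sketch of one iteration of steps 2–4 and 6 above, again assuming the Bernoulli observation model of Section 4 so that the likelihood $p_{v_k}$ has a closed form; step 5 is implicit here, since the PDV is represented by the weighted particle set rather than formed explicitly over all $2^d$ states. Function and variable names are illustrative.

```python
import numpy as np

def particle_filter_step(particles, weights, y, f, p, q, rng, N_T):
    """One iteration (steps 2-4 and 6) of the particle filter for model (11)."""
    N, d = particles.shape

    # 2. Prediction: propagate every particle through the state model.
    noise = (rng.random((N, d)) < p).astype(np.uint8)
    particles = np.array([f(x) for x in particles], dtype=np.uint8) ^ noise

    # 3. Update: weight each particle by the likelihood of Y_k = X_k XOR v_k.
    ham = np.sum(particles ^ y, axis=1)
    weights = weights * (q ** ham) * ((1 - q) ** (d - ham))
    weights = weights / weights.sum()

    # 4. Resampling (multinomial) when the effective sample size drops below N_T.
    if 1.0 / np.sum(weights ** 2) < N_T:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)

    # 6. Approximate MMSE estimate and conditional MSE from the weighted particles.
    mean = weights @ particles                  # approximates A Pi_{k|k}
    x_hat = (mean > 0.5).astype(np.uint8)
    mse = np.minimum(mean, 1.0 - mean).sum()
    return particles, weights, x_hat, mse
```

Note that no $2^d \times 2^d$ prediction matrix is stored and the full PDV is never formed explicitly, which is what makes this approach tractable in high dimensions.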


4. NUMERICAL EXPERIMENTS

Here we illustrate the particle filtering approach described in the previous section by means of the following simple Boolean signal model, where both state and observation vectors are Boolean and of the same dimensionality:

$$
\begin{aligned}
X_k &= f(X_{k-1}) \oplus n_k && \text{(state model)} \\
Y_k &= X_k \oplus v_k && \text{(observation model)}
\end{aligned}
\qquad (11)
$$

for $k = 1, 2, \ldots$. The network function $f$ is selected randomly in our experiments, and the noise is i.i.d., with $n_k(i) \sim \mathrm{Bernoulli}(p)$ and $v_k(i) \sim \mathrm{Bernoulli}(q)$, for $i = 1, \ldots, d$, where $0 < p, q < 0.5$. The noise parameter $p$ gives the amount of "perturbation" to the Boolean state process; the closer it is to $p = 0.5$, the more chaotic the system will be, while a value of $p$ close to zero means that the state trajectories are nearly deterministic, being governed tightly by the network function. Here, we set $p = 0.05$. On the other hand, $q$ gives the intensity of the observation noise, being related to its variance $q(1-q)$. The noise is maximal at $q = 0.5$, while a value of $q$ close to zero means that the observations are tightly correlated with the clean state signal. Estimation performance should therefore be monotone in $q$, improving as $q$ decreases. Here, we consider a few values of $q$ to examine the effect of observation noise on performance.

We employ $N = 1000$ particles and a threshold for the effective sample size of $N_T = 0.1N = 100$. The simulation is carried out by sampling the initial state from a uniform initial state distribution and running the signal model in (11) to generate sample state and observation sequences $X_k$ and $Y_k$, respectively, for $k = 1, \ldots, 100$. The observation sequence $Y_k$ is fed to the particle filter to obtain an estimated state sequence $\hat{X}_k$. We also calculate the estimated error rate $\varepsilon = \frac{1}{100}\sum_{k=1}^{100} I(X_k \neq \hat{X}_k)$, where $I$ is the indicator function. Values of $\varepsilon$ close to zero indicate accurate estimation performance.
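
A sketch of this simulation protocol, reusing the particle_filter_step function sketched in Section 3 and a random truth-table network function $f$ as in the earlier model sketch; the wrapper run_experiment and its defaults are illustrative, not the exact code used for the figures.

```python
import numpy as np

def run_experiment(f, d=8, p=0.05, q=0.01, N=1000, T=100, seed=0):
    """Simulate model (11) for T steps and track the state with the particle filter."""
    rng = np.random.default_rng(seed)
    N_T = 0.1 * N                                       # resampling threshold

    # Initial state drawn uniformly; particles initialized the same way (step 1).
    x = rng.integers(0, 2, size=d, dtype=np.uint8)
    particles = rng.integers(0, 2, size=(N, d), dtype=np.uint8)
    weights = np.full(N, 1.0 / N)

    errors = 0
    for k in range(1, T + 1):
        # Generate the true state and its noisy observation from model (11).
        x = f(x) ^ (rng.random(d) < p).astype(np.uint8)
        y = x ^ (rng.random(d) < q).astype(np.uint8)

        particles, weights, x_hat, _ = particle_filter_step(
            particles, weights, y, f, p, q, rng, N_T)
        errors += int(np.any(x_hat != x))               # I(X_hat_k != X_k)

    return errors / T                                   # estimated error rate epsilon
```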

Figures 1(a) and (b) display results for $d = 8$ with $q = 0.01$ (clean observations) and $q = 0.1$ (noisy observations), respectively, both run on the same randomly generated network function. Displayed are the true (solid line) and estimated (dashed line) state paths, as well as the conditional MSE. We can see that the clean observations produce a much better result, with smaller MSE and error rate $\varepsilon$. It appears that the number of particles $N = 1000$ is insufficient to handle the noisy observation case.

Fig. 1. Particle filtering results for (a) clean observations ($q = 0.01$) and (b) noisy observations ($q = 0.1$). The error rates are $\varepsilon = 14.04\%$ and $\varepsilon = 48.48\%$, respectively. Solid line: true state path. Dashed line: estimated state path. Also displayed are the corresponding conditional MSEs.

Interestingly, running the filters again with the network function and all parameters fixed, but without a resampling step, produces a smaller error rate for the clean observation case, $\varepsilon = 6.06\%$, and a larger error rate for the noisy observation case, $\varepsilon = 58.58\%$, which seems to indicate that resampling helps in the noisy case but is actually detrimental in the low-noise case. To verify this, we ran the particle filter on a system with $d = 10$ variables, $p = 0.05$, and $q = 0.01$ (clean observations), and plotted the error rate as a function of the number of particles used, both with and without the resampling step. In the former case, $N_T = 0.1N$, as before. Figure 2 displays the plots. We can see that the error rate generally decreases with an increasing number of particles, which is expected, but we also see that the error rates are lower or the same without the resampling step. With a small number of particles relative to the dimensionality of the problem, resampling appears to be detrimental to performance.

Fig. 2. Error rates as a function of the number of particles.

5. CONCLUSION

We presented in this paper a particle filtering approach to op-timal state estimation in Boolean dynamical systems. Suchan approach is necessary in the case of high-dimensional sys-tems, where exact computation of the estimator may not befeasible. Numerical examples demonstrated the performanceof the method under different noise conditions, number ofparticles, dimensionality, in addition to the presence or ab-sence of a resampling step.

The use of a resampling step appears to be detrimental under low-noise conditions and smaller numbers of particles. More work is necessary to confirm such observations. In addition, future work will investigate the use of the particle filtering approach in the system identification problem, where unknown parameters of the network function are estimated from the observations.

6. REFERENCES

[1] N. Gordon, D. Salmond, and A. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proceedings F, vol. 140, no. 2, pp. 107–113, 1993.

[2] S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.

[3] D. Simon, Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches. New York, NY: Wiley-Interscience, 2006.

[4] U. Braga-Neto, "Optimal state estimation for Boolean dynamical systems," in Proceedings of the 45th Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, 2011.

[5] U. Braga-Neto, "Joint state and parameter estimation for Boolean dynamical systems," in Proceedings of the IEEE Statistical Signal Processing Workshop (SSP'12), Ann Arbor, MI, 2012.
