


© M. Budge, Jr – Jan 2018 43

5.0 SCALAR, DISCRETE TIME KALMAN FILTER

5.1 Introduction

Before proceeding with a formal development of the Kalman filter we will present a heuristic development for the scalar case to illustrate some of the features of the Kalman filter. We first present the filter development and then study, in some detail, the structure and properties of the Kalman filter. We next consider a simple example to illustrate some of the properties. Finally, we present the matrix formulation of the Kalman filter and consider another example of how to implement the matrix form of the Kalman filter.

5.2 Filter Development

5.2.1 Background

Consider a system of the form

x(k+1) = F x(k) + G w(k) (5-1)

where x(k) is the system state and F and G are known constants. w(k) is termed the system input, or system disturbance, and is a zero-mean, white, random process that is uncorrelated with the initial state, x(0), and has a covariance of Q(k).

We wish to know the values of x(k+1). However, x(k+1) is not directly measurable, or observable. Instead we can only measure the quantity, y(k+1), which is related to x(k+1) by the equation

y(k+1) = H x(k+1) + v(k+1) (5-2)

where H is a known constant. v(k+1) is a zero-mean, white, random process that is uncorrelated with w(k) and the initial state and has a covariance of R(k+1). v(k+1) is termed the measurement noise.
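To make the model of Equations 5-1 and 5-2 concrete, the short sketch below simulates the state and measurement sequences. The numeric values of F, G, H, Q and R are illustrative assumptions only; the text leaves them unspecified.

```python
import random

# Illustrative model constants (assumptions, not values from the text).
F, G, H = 0.9, 1.0, 2.0
Q, R = 0.25, 1.0   # variances of w(k) and v(k+1)

random.seed(1)

def simulate(n_steps, x0):
    """Generate states x(1..n) and measurements y(1..n) per (5-1) and (5-2)."""
    xs, ys = [], []
    x = x0
    for _ in range(n_steps):
        w = random.gauss(0.0, Q ** 0.5)   # zero-mean white disturbance w(k)
        x = F * x + G * w                 # x(k+1) = F x(k) + G w(k)
        v = random.gauss(0.0, R ** 0.5)   # zero-mean white measurement noise v(k+1)
        ys.append(H * x + v)              # y(k+1) = H x(k+1) + v(k+1)
        xs.append(x)
    return xs, ys

xs, ys = simulate(50, x0=0.0)
print(len(xs), len(ys))   # 50 50
```

The filter developed below sees only the `ys` sequence; the `xs` sequence is what it is trying to estimate.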

The fact that we chose a measurement index of k+1 in our problem definition is arbitrary. We could also have used a measurement index of k. In general, if we are building Kalman filters for the so-called tracking problem, we use a measurement index of k+1. The tracking problem arises in radar, sonar and other problems where we are using the Kalman filter as part of a target tracker to estimate the target state (position, velocity, acceleration, etc.) from certain measurements. The use of the k+1 index is usually based on the fact that we want to know the target state in the future so that we know where to look when we make measurements.

We usually use a measurement index of k when we use the Kalman filter in a control theory situation. That is, to estimate the states of a system so that we can implement some sort of control law based on the states, such as an optimal control law. We use a measurement index of k in that setting because, when we write the state variable equations of the system for control theory purposes, the index of the measurement equation is k.

The details of the Kalman filter formulations for measurement indices of k and k+1 are somewhat different. In this chapter we will derive the formulation for the measurement index of k+1 and present the form of the Kalman filter for a measurement index of k. It is left as an exercise for the reader to derive the Kalman filter for the case where the measurement index is k.

5.2.2 Problem Definition

Since we can’t measure x(k+1) directly we want to devise a means of estimating it. Since the only data we have are the measurements, y(1), y(2), … y(k+1), it seems logical that we should formulate the estimate of the state in terms of them. Specifically, we let

x̂(k+1) = g(y(1), y(2), … y(k+1)). (5-3)

We note that we know the values of F, G and H, as well as the values of Q(k) and R(k+1) for all k. We keep this in mind in case we need to use them later.

Equation 5-3 states that the estimate will be some function of the measurements. However, we have not specified the nature of the function. In general we could let g be some non-linear function. However, past experience tells us that working with non-linear functions is very difficult. Therefore, we will settle for g being a linear function and write the state estimate as

x̂(k+1) = Σ_{m=1}^{k+1} a(m,k) y(m). (5-4)

In Equation 5-4 we want to choose the a(m,k) so that x̂(k+1) is the “best” linear estimate of x(k+1). Since we want our estimate to be the best estimate of the state we need some type of criterion for measuring the goodness of our estimate. Although there are many criteria, the one we will choose is the mean-squared error between the actual state, x(k+1), and the estimate, x̂(k+1). That is, we choose the a(m,k) of Equation 5-4 so that we minimize

P(k+1) = E[(x(k+1) - x̂(k+1))²] (5-5)


where x(k+1) - x̂(k+1) is the error between the state and the estimate. We chose the expectation to obtain the mean because both the state and the estimate are random processes.

To recapitulate the above, we wish to find an estimate of the state by forming a linear combination of the measurements in such a fashion as to minimize the mean-squared error between the actual state and its estimate. This problem is commonly referred to as linear, mean-squared estimation and the estimate, x̂(k+1), is commonly referred to as a linear, mean-squared estimate.

Now that the problem is formulated all that remains is to determine the a(m,k). A brute force method of doing this is to substitute Equation 5-4 into Equation 5-5, take the partial derivatives with respect to all of the a(m,k), set the partial derivatives to zero and solve the resulting set of simultaneous equations. This is essentially the approach we will use. However, we will state the problem somewhat differently.

5.2.3 Orthogonality Condition

The action of taking the partial derivatives and setting them to zero leads to a condition that is termed the orthogonality condition. For now we simply state the orthogonality condition and use it. We will prove it in Chapter 7.

ORTHOGONALITY CONDITION: Given a linear estimator of the form of Equation 5-4, a necessary and sufficient condition for minimization of the mean-squared error of Equation 5-5 is that the a(m,k) be chosen so that the error, x(k+1) - x̂(k+1), is orthogonal to the measurements, y(1), y(2), … y(k+1). That is, that

E[(x(k+1) - x̂(k+1)) y(m)] = 0, m = 1, …, k+1. (5-6)

If the a(m,k) are chosen to satisfy the orthogonality condition, the minimum error is given by

P(k+1) = E[(x(k+1) - x̂(k+1)) x(k+1)]. (5-7)

The derivation of Equation 5-7 is left to the reader.
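For a single measurement, the orthogonality condition reduces to E[(x(1) - a y(1)) y(1)] = 0, which can be solved directly for a = E[x(1)y(1)]/E[y²(1)]. The Monte Carlo sketch below checks this numerically using sample moments; the model constants are illustrative assumptions, and the resulting coefficient should land near the gain derived in Section 5.2.4.

```python
import random

random.seed(2)

# Illustrative parameters (assumptions, not values from the text).
F, G, H = 0.9, 1.0, 2.0
Q0, R1, P0 = 0.25, 1.0, 4.0

# Draw many realizations of x(1) = F x(0) + G w(0) and y(1) = H x(1) + v(1).
N = 100_000
pairs = []
for _ in range(N):
    x0 = random.gauss(0.0, P0 ** 0.5)
    w0 = random.gauss(0.0, Q0 ** 0.5)
    v1 = random.gauss(0.0, R1 ** 0.5)
    x1 = F * x0 + G * w0
    y1 = H * x1 + v1
    pairs.append((x1, y1))

# Orthogonality: E[(x(1) - a y(1)) y(1)] = 0  =>  a = E[x y] / E[y^2].
Exy = sum(x * y for x, y in pairs) / N
Eyy = sum(y * y for _, y in pairs) / N
a = Exy / Eyy

# The estimation error is now (numerically) orthogonal to the measurement.
resid = sum((x - a * y) * y for x, y in pairs) / N
print(round(abs(resid), 6))   # 0.0
```

Because a is built from the same sample moments, the residual correlation is zero to floating-point precision, which is exactly what the orthogonality condition demands.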

5.2.4 Problem Solution

We will now use the orthogonality condition to derive the estimator. We do this by what we will call a heuristic, induction type of derivation. That is, we will derive the estimator for k = 0 and 1 and then extrapolate the results to the general case. In Chapter 7 we will present a rigorous, formalized development of the Kalman filter.

From Equation 5-4 with k = 0 we get

x̂(1) = a(1,0) y(1) (5-8)

and from Equation 5-6 we want to choose a(1,0) such that

E[(x(1) - x̂(1)) y(1)] = 0. (5-9)

Substituting for x̂(1) from Equation 5-8 yields

E[(x(1) - a(1,0) y(1)) y(1)] = 0. (5-10)

Furthermore, with the substitutions

x(1) = F x(0) + G w(0) (5-11)

and

y(1) = H x(1) + v(1) = HF x(0) + HG w(0) + v(1) (5-12)

we get

E[(F x(0) + G w(0) - a(1,0)(HF x(0) + HG w(0) + v(1))) (HF x(0) + HG w(0) + v(1))] = 0. (5-13)

Performing the indicated operations of Equation 5-13 yields

HF²(1 - a(1,0)H)E[x²(0)] + 2FHG(1 - a(1,0)H)E[x(0)w(0)] + F(1 - 2a(1,0)H)E[x(0)v(1)]
+ HG²(1 - a(1,0)H)E[w²(0)] + G(1 - 2a(1,0)H)E[w(0)v(1)] - a(1,0)E[v²(1)] = 0. (5-14)

In order to simplify Equation 5-14 we recall the conditions we imposed upon the input random process, w(k), and the measurement noise process, v(k). Specifically, we stated that w(k) was zero-mean, white and uncorrelated with x(0). We also stated that v(k) was zero-mean, white and uncorrelated with w(k) and x(0). These translate to the following statements

E[w(k)] = 0 for all k, (5-15)

E[w(k)w(m)] = Q(k)δ(k - m), (5-16)

E[w(k)x(0)] = 0 for all k, (5-17)

E[v(k)] = 0 for all k, (5-18)

E[v(k)v(m)] = R(k)δ(k - m), (5-19)

E[v(k)x(0)] = 0 for all k and (5-20)

E[v(k)w(m)] = 0 for all k and m. (5-21)

By extending the results of Section 4.2, we can combine Equations 5-16 and 5-17 to get

E[x(k)w(m)] = 0 for m ≥ k. (5-22)

Also, we can combine Equations 5-19, 5-20 and 5-21 to get

E[x(k)v(m)] = 0 for all k and m. (5-23)

Equations 5-21, 5-22 and 5-23 allow us to eliminate the cross-correlation terms from Equation 5-14 to yield

HF²(1 - a(1,0)H)E[x²(0)] + HG²(1 - a(1,0)H)E[w²(0)] - a(1,0)E[v²(1)] = 0. (5-24)

We next want to examine the first term of Equation 5-24. The initial error, P(0), is given by P(0) = E[(x(0) - x̂(0))²] = E[x²(0)] = P0, since x̂(0) = E[x(0)] = 0. In the alternate development presented in the appendix we will remove the condition that E[x(0)] = 0.

With the above, and Equations 5-16 and 5-19, we can rewrite Equation 5-24 as

HF²(1 - a(1,0)H)P(0) + HG²(1 - a(1,0)H)Q(0) - a(1,0)R(1) = 0 (5-25)

where P(0) = P0 = E[(x(0) - E[x(0)])²]. Finally, we can solve Equation 5-25 for a(1,0) to yield

a(1,0) = [HF²P(0) + HG²Q(0)] / [H²F²P(0) + H²G²Q(0) + R(1)]. (5-26)

The quantity a(1,0) is called the Kalman gain, which we henceforth denote as K(k+1) (K(1) in this case, since k = 0). With this the estimate of the state for k = 0 is given by

x̂(1) = K(1) y(1) (5-27)

where K(1) = a(1,0) is given by Equation 5-26.

Now that we have the equation for our estimate, we want to derive the equation for the mean squared error on the estimate, P(1). From Equation 5-7 the mean squared error is given by


P(1) = E[(x(1) - x̂(1)) x(1)]. (5-28)

Substituting for x̂(1) and x(1) using Equations 5-11 and 5-12 and the estimate x̂(1) = K(1) y(1), we get

P(1) = E[(x(1) - K(1) y(1)) x(1)]
= E[(F x(0) + G w(0) - K(1)(HF x(0) + HG w(0) + v(1))) (F x(0) + G w(0))]. (5-29)

Performing the indicated operations and making use of previous relationships, we can write Equation 5-29 as

P(1) = (1 - K(1)H)(F²P(0) + G²Q(0)). (5-30)

In summary, given a single measurement, y(1), the best linear estimate, x̂(1), of the state, x(1), in a minimum mean squared sense, is

x̂(1) = K(1) y(1) (5-31)

where K(1) is the Kalman gain and is given by

K(1) = [HF²P(0) + HG²Q(0)] / [H²F²P(0) + H²G²Q(0) + R(1)]. (5-32)

The mean squared error, P(1), associated with the estimate is given by

P(1) = (1 - K(1)H)(F²P(0) + G²Q(0)) (5-33)

where

P(0) = P0 = E[(x(0) - E[x(0)])²] (5-34)

and P0 is the variance on the initial state. The above estimate is valid under the conditions that the system and measurement noises are zero-mean and white, and are uncorrelated with each other and with the initial state.
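As a worked numeric example of the single-measurement equations, the sketch below evaluates the gain and the mean squared error for illustrative values of F, G, H, P(0), Q(0) and R(1); these numbers are assumptions for demonstration, not values from the text.

```python
# Illustrative values (assumptions): F = 0.9, G = 1, H = 2,
# P(0) = 4, Q(0) = 0.25, R(1) = 1.
F, G, H = 0.9, 1.0, 2.0
P0, Q0, R1 = 4.0, 0.25, 1.0

# Equation 5-32: Kalman gain for the first measurement.
K1 = (H * F**2 * P0 + H * G**2 * Q0) / (H**2 * F**2 * P0 + H**2 * G**2 * Q0 + R1)

# Equation 5-33: mean squared error of the estimate.
P1 = (1 - K1 * H) * (F**2 * P0 + G**2 * Q0)

print(round(K1, 4), round(P1, 4))   # 0.4666 0.2333
```

Note that P(1) is far smaller than the initial variance P(0) = 4: a single measurement already reduces the uncertainty substantially for these numbers.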

Next we derive equations for the estimate for k = 1. That is, we want to find the a(1,1) and a(2,1) such that the linear estimate

x̂(2) = a(1,1) y(1) + a(2,1) y(2) (5-35)

minimizes the mean squared error

P(2) = E[(x(2) - x̂(2))²]. (5-36)

We begin the solution to this problem by noting that we can use Equation 5-31 to replace y(1) by x̂(1)/K(1) and subsequently rewrite Equation 5-35 as

x̂(2) = b x̂(1) + a(2,1) y(2). (5-37)


This step is subtle but significant in its ramifications in that it means that the estimator need not store the previous measurements. Rather, it needs only store the previous estimate and the new measurement. This greatly reduces the data storage and computational requirements, and makes real-time filtering of data feasible. This is one of the significant contributions offered by the Kalman filter. While previous measurements are not explicitly stored their impact is still present through the use of the previous estimate.

To derive the coefficients b and a(2,1) we again use the orthogonality condition, suitably modified to allow incorporation of x̂(1). This leads to the equations

E[(x(2) - x̂(2)) x̂(1)] = 0 (5-38)

and

E[(x(2) - x̂(2)) y(2)] = 0. (5-39)

Substituting Equation 5-37 into Equation 5-38, and making use of the state equation (Equation 5-1), we get

E[(F x(1) + G w(1) - b x̂(1) - a(2,1) y(2)) x̂(1)]
= E[(F x(1) + G w(1) - b x̂(1) - a(2,1)(H(F x(1) + G w(1)) + v(2))) x̂(1)] = 0. (5-40)

We can expand Equation 5-40, making use of Equations 5-15 through 5-23, to get

F E[x(1)x̂(1)] - b E[x̂²(1)] - a(2,1)HF E[x(1)x̂(1)] = 0. (5-41)

Using the fact that E[(x(1) - x̂(1)) y(1)] = 0 and x̂(1) = K(1) y(1), we deduce that E[(x(1) - x̂(1)) x̂(1)] = 0 and that E[x(1)x̂(1)] = E[x̂²(1)]. Next we can use this in Equation 5-41 to arrive at the relationship

b = F - a(2,1)HF. (5-42)

With appropriate substitutions into, and manipulation of, Equation 5-39 we get

HF²E[x²(1)] + HG²Q(1) - bHF E[x̂(1)x(1)]
- a(2,1)(H²F²E[x²(1)] + H²G²Q(1) + R(2)) = 0. (5-43)

We next make use of Equations 5-42 and 5-28 in Equation 5-43 and perform the appropriate manipulations to obtain

a(2,1) = [HF²P(1) + HG²Q(1)] / [H²F²P(1) + H²G²Q(1) + R(2)]. (5-44)


We note that Equation 5-44 has the same form as Equation 5-32 and that a(2,1) is the multiplier for y(2). Thus, we recognize a(2,1) as the Kalman gain, K(2). If we substitute Equation 5-42 into Equation 5-37 and replace a(2,1) by K(2) we get

x̂(2) = F x̂(1) + K(2)(y(2) - HF x̂(1)) (5-45)

where

K(2) = [HF²P(1) + HG²Q(1)] / [H²F²P(1) + H²G²Q(1) + R(2)]. (5-46)

We will discuss the form and nature of Equation 5-45 a little later in this chapter.

The derivation of the equation for the mean squared error, P(2), associated with the estimate is very similar to the derivation we used for P(1). Specifically, we recognize that P(2) = E[(x(2) - x̂(2)) x(2)] and make appropriate substitutions for x̂(2) and x(2). The result is

P(2) = E[(F x(1) + G w(1) - F x̂(1)
- K(2)(H(F x(1) + G w(1)) + v(2) - HF x̂(1))) (F x(1) + G w(1))]. (5-47)

Finally, after appropriate manipulation, Equation 5-47 reduces to

P(2) = (1 - K(2)H)(F²P(1) + G²Q(1)) (5-48)

which is of the same form as Equation 5-33.

In summary, given a second measurement, y(2), and the previous best linear estimate, x̂(1), the best linear estimate, x̂(2), of the state, x(2), in a minimum mean squared sense, is

x̂(2) = F x̂(1) + K(2)(y(2) - HF x̂(1)) (5-49)

where K(2) is the Kalman gain and is given by

K(2) = [HF²P(1) + HG²Q(1)] / [H²F²P(1) + H²G²Q(1) + R(2)]. (5-50)

The mean squared error, P(2), associated with the estimate is given by

P(2) = (1 - K(2)H)(F²P(1) + G²Q(1)). (5-51)

If we were to proceed to derive the form of the estimation equation, Kalman gain and mean squared error for the case of k = 2, 3, … we would get equations of the form of Equations 5-49, 5-50, and 5-51 with the indices increasing by one as k increased by one. Therefore, we can use Equations 5-49, 5-50, and 5-51 to define the general form of the scalar Kalman filter.

5.2.5 Summary

Suppose we have a system that we represent by the discrete-time scalar model

x(k+1) = F x(k) + G w(k). (5-52)

Suppose further that the model of the system measurements is given by

y(k+1) = H x(k+1) + v(k+1). (5-53)

In Equation 5-52, w(k) is a system input disturbance and is assumed to be a white, zero-mean, random process with a covariance of Q(k). In Equation 5-53, v(k+1) is the measurement noise and is assumed to be a white, zero-mean, random process with a covariance of R(k+1). We assume that the initial state, x(0), has a variance of P0. We further assume that w(k) and x(0) are uncorrelated for all k, v(k) and x(0) are uncorrelated for all k, and w(k) and v(m) are uncorrelated for all k and m.

Under the above conditions, the linear estimate, x̂(k+1), of the system state, x(k+1), that minimizes the mean squared error, P(k+1), between the estimate and the state is given by the equation

x̂(k+1) = F x̂(k) + K(k+1)(y(k+1) - HF x̂(k)) (5-54)

where K(k+1) is the Kalman gain and is given by

K(k+1) = [HF²P(k) + HG²Q(k)] / [H²F²P(k) + H²G²Q(k) + R(k+1)]. (5-55)

The minimum mean squared error associated with the estimate, x̂(k+1), is given by the equation

P(k+1) = (1 - K(k+1)H)(F²P(k) + G²Q(k)). (5-56)

In the above equations we assume that x̂(0) = E[x(0)] and that P(0) = P0. Equations 5-54, 5-55 and 5-56, along with these initial conditions, are termed the Kalman filter for the system and measurement represented by the models of Equations 5-52 and 5-53, with the assumptions stated above.
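The summary equations translate directly into a short recursion. The sketch below runs the scalar Kalman filter of Equations 5-54 through 5-56 against a simulated system; the model constants are illustrative assumptions (constant Q and R are used for simplicity).

```python
import random

random.seed(3)

# Illustrative model constants (assumptions, not values from the text).
F, G, H = 0.9, 1.0, 2.0
Q, R = 0.25, 1.0    # constant Q(k) and R(k+1) for simplicity
P0 = 4.0

def kalman_step(x_hat, P, y_next):
    """One recursion of Equations 5-54, 5-55 and 5-56."""
    P_pred = F**2 * P + G**2 * Q                      # F^2 P(k) + G^2 Q(k)
    K = H * P_pred / (H**2 * P_pred + R)              # Eq. 5-55
    x_hat = F * x_hat + K * (y_next - H * F * x_hat)  # Eq. 5-54
    P = (1 - K * H) * P_pred                          # Eq. 5-56
    return x_hat, P

# Simulate the true system and filter its measurements.
x = random.gauss(0.0, P0 ** 0.5)   # true x(0)
x_hat, P = 0.0, P0                 # x_hat(0) = E[x(0)] = 0, P(0) = P0
for _ in range(100):
    w = random.gauss(0.0, Q ** 0.5)
    x = F * x + G * w                           # true x(k+1)
    y = H * x + random.gauss(0.0, R ** 0.5)     # measurement y(k+1)
    x_hat, P = kalman_step(x_hat, P, y)

print(round(P, 4))   # mean squared error after 100 updates
```

Note that the P recursion does not depend on the measurements at all, so for constant F, G, H, Q and R it converges to a fixed steady-state value regardless of the data.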


5.3 Kalman Filter Structure and Properties

5.3.1 Introduction

We will now take some time to discuss the Kalman filter equations. We first note that we changed the definitions of Equations 5-52 and 5-53. When we stated them at the beginning of this chapter we said that they were the system equation and the measurement equation, respectively. However, above we termed them models for the system and measurement. This is an often overlooked and important point. In fact, we want to estimate states of some system given measurements we obtain from the system. In order to build a Kalman filter for the system we need a model for it. We also need a model that describes how the measurement relates to the state. If we are lucky, the system and measurements will be accurately described by our model. If this is the case then, as we will show shortly, the Kalman filter will do a good job of estimating the states. However, if our model doesn’t accurately describe the system and/or measurement, the Kalman filter may not perform well. In fact, one of the things we normally do in practice is to use the system disturbance, w(k), and measurement error, v(k), to attempt to characterize the uncertainty that we may have in our system and/or measurement models.

Recall the example in the introduction where we discussed the fact that the resistors came from a box of resistors that had a 1% tolerance. This 1% would be captured in Q(k), the covariance on w(k). Furthermore, our inability to accurately read the ohmmeter would be captured in R(k), the covariance on v(k). This concept that we design Kalman filters based on a model of the system and measurement cannot be overemphasized. Failure to recognize this can often lead to considerable frustration in attempting to make a Kalman filter work the way we would like.

5.3.2 System and Measurement Models

We now want to discuss the conditions we imposed on w(k), v(k) and x(0). Recall that these conditions were imposed to facilitate the derivation of the Kalman filter. They were not derived from physical characteristics of the system or measurement process. This is somewhat disturbing and leads to the question of whether the assumptions are good ones and, if not, whether incorrect assumptions will affect the performance of the Kalman filter.

The condition that w(k) be white is often difficult to support. In fact, as we will see in the radar tracking example later, w(k) is most definitely not white. Indeed, we include w(k) in the problem definition of the radar tracking problem to give us parameters that we can use to “tune” the Kalman filter. Similar statements are usually true for other applications of the Kalman filter. A standard approach to addressing the problem of whether or not w(k) is white is to ignore it and hope for the best. If, in fact, one can trace a performance problem with the Kalman filter to the fact that w(k) is not white, there are ways of augmenting the system model to include a non-white form of w(k). We will discuss how to do this in a later chapter. A non-zero mean w(k) can also be easily incorporated into the Kalman filter design, provided the mean is known. This will also be discussed in a later chapter.

The condition that v(k) is white is usually fairly easy to support. In fact, in many applications the system designer can design the measurement process to assure that v(k) is white, or at least very broadband. As with w(k), if v(k) is not white there are ways of augmenting the system and measurement model to include a non-white form of v(k). We will discuss how to do this in a later chapter. Also as with w(k), another means of accommodating the fact that v(k) is non-white is to ignore this fact and see how well the filter works. The fact that v(k) is not zero-mean can also be easily incorporated into the Kalman filter design, provided the mean is known. This will also be discussed in a later chapter.

The assumption that w(k) and v(k) are uncorrelated is usually a good one simply because these random processes derive from different sources. In fact, one of the standard assumptions is that they are independent. The author is not familiar with any means of incorporating correlated w(k) and v(k) into the Kalman filter design. Therefore, if w(k) and v(k) are indeed correlated, the Kalman filter designer must live with this fact and hope that the Kalman filter will work well.

The assumption that w(k) and v(k) are uncorrelated with the initial state is usually a good one, again, because the sources of w(k), v(k) and x(0) are different. However, having said this, if w(k) is non-white it could very well be correlated with x(0). The same could be true for v(k). As a practical note, it would be very difficult to show that w(k) and/or v(k) are correlated with x(0). Based on this, a standard assumption is to take it on faith that w(k) and v(k) are uncorrelated with the initial state. As with the correlation between w(k) and v(k), the author knows of no way to incorporate the fact that w(k) and/or v(k) are correlated with the initial state.

While some of the above discussions regarding the system and measurement models, and w(k), v(k) and x(0), may be disturbing to some readers, they are included as a practical matter since ignorance of them can be the source of considerable difficulties in designing and implementing Kalman filters. Inaccurate system and measurement models are most often the problems that experienced Kalman filter designers must deal with. In particular, accounting for these inaccuracies is often one of the major factors that must be considered in Kalman filter design. The same can be said about the whiteness of w(k). The problem with the whiteness of v(k) is often a pitfall of the novice Kalman filter designer. These designers are often under the mistaken impression that they can improve Kalman filter performance by increasing the number of samples taken over a period of time. What they don’t realize is that this can cause v(k) to be correlated, which degrades the performance of the Kalman filter.

5.3.3 The Kalman Filter Equations

Next let us turn our attention to the form of the Kalman filter equations. The term F x̂(k) in Equation 5-54 is often termed the predicted state estimate and is denoted x̂(k+1|k). If we were to have no measurement data but wanted to estimate the state one stage in the future, the best thing that we could use is the state equation of Equation 5-52. However, this equation contains the term w(k), which we don’t know. Given no other information, our best choice for w(k) would be its mean, which is zero. Thus, for want of any better equation, a best guess of the state at stage k+1, given its estimate, x̂(k), at stage k, would be given by

x̂(k+1|k) = F x̂(k) + G E[w(k)] = F x̂(k). (5-57)

Suppose we wanted to characterize the error in the predicted estimate relative to the actual state. If we were to form this error term by subtracting Equation 5-57 from Equation 5-52 we would get

x(k+1) - x̂(k+1|k) = F(x(k) - x̂(k|k)) + G w(k) (5-58)

or

x̃(k+1|k) = F x̃(k|k) + G w(k), (5-59)

where the definition of x̃(m|k) is obvious with the recognition that x̂(k|k) = x̂(k). We can use the results of Chapter 4 to find the covariance on x̃(k+1|k). Specifically, since w(k) is white, and w(k) and x(0) are uncorrelated and zero mean, we can write the covariance on x̃(k+1|k) as (see Equation 4-45)

P(k+1|k) = F²P(k|k) + G²Q(k) (5-60)


or

P(k+1|k) = F²P(k) + G²Q(k). (5-61)

In the above we have made use of the following:

P(k+1|k) = E[x̃²(k+1|k)] (5-62)

and

P(k|k) = E[x̃²(k|k)] = E[(x(k) - x̂(k))²] = P(k). (5-63)

We note that the right side of Equation 5-61 appears in Equations 5-55 and 5-56.
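The prediction covariance of Equation 5-61 can be checked by simulation: draw a current estimation error with variance P(k), propagate it through Equation 5-59 together with a fresh disturbance, and compare the sample variance against F²P(k) + G²Q(k). The numeric values below are illustrative assumptions.

```python
import random

random.seed(4)

# Illustrative constants (assumptions, not values from the text).
F, G = 0.9, 1.0
P_k, Q_k = 0.5, 0.25    # current error variance P(k) and disturbance variance Q(k)
N = 200_000

# Propagate the prediction error per Eq. 5-59:
# xtilde(k+1|k) = F xtilde(k|k) + G w(k).
errs = [F * random.gauss(0.0, P_k ** 0.5) + G * random.gauss(0.0, Q_k ** 0.5)
        for _ in range(N)]

sample_var = sum(e * e for e in errs) / N
theory = F**2 * P_k + G**2 * Q_k        # Eq. 5-61
print(round(sample_var, 3), round(theory, 3))
```

The two printed values should agree to within Monte Carlo error, confirming that prediction inflates the error variance by exactly the F²P + G²Q combination that appears in Equations 5-55 and 5-56.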

We next turn our attention to the bracketed term on the right side of Equation 5-54. The term HF x̂(k) is termed the predicted measurement at stage k+1 and is denoted as ŷ(k+1|k). Indeed, given the predicted state at stage k+1, the measurement equation of Equation 5-53, and the fact that v(k+1) is zero mean, the best estimate of the measurement at stage k+1 is

ŷ(k+1|k) = H x̂(k+1|k) + E[v(k+1)] = HF x̂(k). (5-64)

Let us define the error between the actual and predicted measurement as

ỹ(k+1|k) = y(k+1) - ŷ(k+1|k) (5-65)

or

ỹ(k+1|k) = H(x(k+1) - x̂(k+1|k)) + v(k+1). (5-66)

With this we can derive the covariance on the error as

P_y(k+1) = H²P(k+1|k) + R(k+1) = H²F²P(k) + H²G²Q(k) + R(k+1) (5-67)

which is the denominator of Equation 5-55.

Let us return to Equation 5-54 and rewrite it with the terms we defined above. Specifically,

x̂(k+1) = x̂(k+1|k) + K(k+1)(y(k+1) - ŷ(k+1|k))
= x̂(k+1|k) + K(k+1)(y(k+1) - H x̂(k+1|k)). (5-68)

From this equation we see that we form the state estimate at stage k+1, which we call the smoothed state estimate, by adding a correction term to the predicted state estimate at stage k+1. The correction term is determined by the difference between the actual measurement at stage k+1 and the predicted measurement at stage k+1. The amount of weight we give to this error is determined by the Kalman gain, K(k+1). The Kalman gain, in turn, depends upon the covariance on the error between the predicted and actual state and the covariance on the error between the predicted and actual measurement. In equation form,

K(k+1) = H P(k+1|k) / P_y(k+1) = H P(k+1|k) / (H²P(k+1|k) + R(k+1)). (5-69)

If R(k+1) is large relative to H²P(k+1|k) we note that K(k+1) approaches zero and the smoothed state is biased toward the predicted state. Making R(k+1) large relative to H²P(k+1|k) is our way of telling the Kalman filter that we have much less faith in the measurement than in the system equations. As a result, we are inclined to adjust the filter so that we don’t use much of the information contained in the measurement. In the limit, as R(k+1) approaches infinity, we completely ignore the measurement. This makes intuitive sense because if we let R(k+1) approach infinity, we are saying that we have absolutely no faith in the measurement. On the other hand, if P(k+1|k) approaches zero, we are saying that the predicted state estimate is, statistically speaking, exactly equal to the actual state. In this case, we should also ignore the measurement because it couldn’t improve on what is, statistically speaking, an already exact answer.

If H²P(k+1|k) is large relative to R(k+1) we note that K(k+1) approaches 1/H and that the smoothed estimate approaches y(k+1)/H. In this case we note that the filter tends to ignore the predicted state estimate and rely solely on the measurement. From Equation 5-69 we note that K(k+1) approaches 1/H when we let R(k+1) approach zero. Indeed, if we let R(k+1) approach zero we are saying that we believe that the measurement provides a very good indication of the actual state. Therefore, it makes intuitive sense that the Kalman filter should rely more heavily on it than on the predicted state. Another way that K(k+1) can approach 1/H is for us to let P(k+1|k) become very large. If this happens, we are saying that we have very little faith in the predicted state estimate. If this is the case, it makes sense that the Kalman filter should place little emphasis on it and, instead, use the measurement.
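The limiting behavior of the gain in Equation 5-69 is easy to see numerically. The sketch below assumes an illustrative H and P(k+1|k) and sweeps R(k+1).

```python
# Gain of Eq. 5-69 for assumed values H = 2 and P(k+1|k) = 0.5.
H, P_pred = 2.0, 0.5

def gain(R):
    return H * P_pred / (H**2 * P_pred + R)

print(gain(1e9))     # R huge: gain near 0 -> ignore the measurement
print(gain(1e-9))    # R tiny: gain near 1/H = 0.5 -> trust the measurement
print(gain(1.0))     # intermediate case
```

The same sweep applied to P(k+1|k) with R fixed shows the mirror-image behavior: a very large prediction covariance also drives the gain toward 1/H.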

We now turn our attention to the covariance equation of Equation 5-56.

In this equation we note that as 1K k approaches zero that the covariance

on the smoothed state estimate approaches the covariance of the predicted

state estimate. With some thought this makes intuitive sense since 1K k

approaching zero is tantamount to assuming that the predicted state estimate is much better than the measurement. If this is the case, the smoothed state estimate is taken to be the predicted state estimate and, thus the covariance on


the smoothed state estimate should be almost equal to the covariance on the predicted state estimate.

As K(k+1) approaches 1/H we note that the covariance on the smoothed state estimate appears to approach zero. In fact, what we will soon learn from some experiments is that the covariance on the smoothed state estimate approaches R(k+1)/H². (It is left as an exercise for the reader to show that P(k+1) → R(k+1)/H² as P(k+1|k) → ∞.) Again, this makes intuitive sense because for the case where K(k+1) approaches 1/H the smoothed state estimate depends much more heavily on the measurement than on the predicted state estimate. Therefore, the covariance on the smoothed state estimate should be determined by R(k+1) and not P(k+1|k).
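The limit in the exercise above can be checked numerically with the scalar relation P(k+1) = [1 − K(k+1)H]P(k+1|k). The sketch below (our own illustration; H and R values are arbitrary) shows the smoothed covariance approaching R(k+1)/H² as P(k+1|k) grows:

```python
# Smoothed covariance P = (1 - K*H) * P_pred, with the scalar Kalman gain.
def smoothed_covariance(H, P_pred, R):
    K = H * P_pred / (H**2 * P_pred + R)
    return (1.0 - K * H) * P_pred

H, R = 2.0, 0.04   # illustrative values
for P_pred in (1.0, 1e3, 1e6, 1e9):
    print(P_pred, smoothed_covariance(H, P_pred, R))
# As P_pred grows, the result approaches R / H^2 = 0.01
```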

5.3.4 Summary

In the above discussions we left the impression (or tried very hard to leave the impression) that the Kalman filter designer controls the operation of the Kalman filter through the various filter parameters. This often disturbs Kalman filter novices. Because of ignorance, they are under the mistaken impression that they can loosely define the parameters on the system and noises, and expect the Kalman filter to work well. Nothing could be further from the truth. In fact, even when designers very carefully specify system models and noise characteristics, the Kalman filter may only provide mediocre performance. What designers often find is that after very careful design of the Kalman filter, they must expend considerable time and energy tuning it through simulation and, when possible, actual application. This doesn’t stem from an inadequacy in Kalman filtering theory but from the fact that it is very difficult to accurately model systems and noises. Having made all of these negative comments, it is generally accepted that Kalman filters are some of the easiest to design and best state estimators currently available. This rather bold statement is borne out by the wide use Kalman filters currently receive.

5.4 Example 5-1 – A First-order Kalman Filter

In this section we present a few examples to illustrate the Kalman filter properties discussed in the previous section. For the first example we consider a system where the state transition parameter is F = 0.97 and the input distribution parameter is G = 1. We assume our measurement is the state, x(k), corrupted by noise. With this, our system model is

x(k+1) = 0.97x(k) + w(k) (5-70)

and our measurement model is

y(k+1) = x(k+1) + v(k+1) . (5-71)


We let the input disturbance be zero-mean, white, Gaussian¹ noise with a variance of Q(k) = 0.001. The measurement noise is also zero-mean, white, Gaussian noise and has a variance of R(k+1) = 0.01. The initial value of the actual state is 1.0 but we assume we didn’t know this when we formulated the filter.

With this the Kalman filter equations are given by

x̂(k+1) = F x̂(k) + K(k+1)[y(k+1) − H F x̂(k)] , (5-72)

K(k+1) = [H F²P(k) + H G²Q(k)][H²F²P(k) + H²G²Q(k) + R(k+1)]⁻¹ , (5-73)

and

P(k+1) = [1 − K(k+1)H][F²P(k) + G²Q(k)] , (5-74)

with F = 0.97, G = 1, H = 1, Q(k) = 0.001 and R(k+1) = 0.01. Since we don’t know the value of the actual initial state, we assume it is zero and thus let x̂(0) = 0. We represent our lack of knowledge of the initial state by letting P(0) = 10. From our earlier discussions, this will force the initial Kalman gain to 1/H and cause the Kalman filter to initially ignore the predicted state estimates and use only the system measurements.
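The text’s simulation was done in Matlab (Example5_1.m); an equivalent sketch in Python (our own illustration, using the parameter values above) is:

```python
import random

# Example 5-1 parameters (from the text)
F, G, H = 0.97, 1.0, 1.0
Q, R = 0.001, 0.01
P, xhat = 10.0, 0.0          # P(0) = 10, xhat(0) = 0
x = 1.0                      # actual initial state (unknown to the filter)

random.seed(1)
for k in range(500):
    # Truth model, Eqs. 5-70 and 5-71
    x = F * x + random.gauss(0.0, Q ** 0.5)
    y = H * x + random.gauss(0.0, R ** 0.5)
    # Kalman filter, Eqs. 5-72 to 5-74
    M = F**2 * P + G**2 * Q                   # predicted covariance
    K = H * M / (H**2 * M + R)                # Eq. 5-73
    xhat = F * xhat + K * (y - H * F * xhat)  # Eq. 5-72
    P = (1.0 - K * H) * M                     # Eq. 5-74

print(round(K, 3), round(P, 4))  # gain and covariance level off near 0.25 and 0.0025
```

Note that the gain and covariance recursions do not depend on the measurement data, so they reach the same steady state on every run.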

The results of a simulation of the model and Kalman filter are shown in Figure 5-1. In this figure, the top graph contains plots of the actual state (dashed line) and the state estimate (solid line). The second graph is a plot of the Kalman gain, K(k+1), and the third graph is a plot of the covariance, P(k). Finally, the last graph is a plot of the actual state and the measurement.

As can be seen from the plots, the Kalman filter does a very good job of estimating the state of the system. For the first couple of stages the Kalman gain is one and forces the Kalman filter to use the measurement as the state estimate. This is indicated by the fact that the value of the state estimate at k = 1 is x̂(1) = 0.8, which is the same value as the measurement. For k = 2 the value of the Kalman gain has dropped to about 0.5. In this case, the state estimate at k = 2 is not quite the same value as the measurement at k = 2 since the filter is beginning to use a combination of its state estimate and the measurement. As time progresses, the Kalman gain decreases until it levels off at about 0.25. This means that the Kalman filter is forming new estimates by weighting new measurements by about 25% and previous state estimates by about 75%. This makes sense since the combined state and disturbance

¹ Note that we have added the caveat that the noise be Gaussian. We did this here because Gaussian noise is easy to generate on the computer. It is not a necessary requirement for the Kalman filter. Later we will address the ramifications of assuming Gaussian noise.


covariance (i.e., F²P(k) + G²Q(k), with H = 1) is about 1/3 of the measurement covariance.

It will be noted that the estimate covariance reaches a steady state value of 0.0025. This is 1/4 of the measurement covariance and represents a fairly significant noise reduction. The fact that the Kalman gain and estimate covariance reach constant values is due to the assumption that the system disturbance and measurement covariances, Q(k) and R(k+1), are constant.

Figure 5-2 contains plots for the case where we have increased the measurement noise by a factor of 10 so that R(k+1) = 0.1. The rest of the system and filter parameters remain the same as in the previous example.

Figure 5-1. Plots for Example 1 - R=0.01


Again the Kalman filter is doing a good job of estimating the state. However, in this case, the Kalman gain is less than 0.1 rather than the value of 0.25 of the previous example. This means that the Kalman filter is giving less weight to the measurements and is relying more on the system model. Again, this makes sense since the measurements are very noisy (as indicated by the bottom plot) and are thus not as reliable as in the previous example.

We note that the estimate covariance has increased from 0.0025 in the previous example to about 0.0075 in this example. This makes sense since the increased measurement noise means that the estimate will be less reliable. Interestingly, we also note that the noise reduction is larger in this example than in

Figure 5-2. Plots for Example 2 - R=0.1


the previous example. In this example the estimate covariance is about 1/13 of the measurement covariance.

The results of a third example are contained in Figure 5-3. In this case we have reduced the measurement covariance to R(k+1) = 0.0001, meaning that we have almost noise-free measurements of the state. In this case the Kalman filter again does an excellent job of estimating the state. The Kalman gain is about 0.9, which means that the measurements are being given much more weight than the system model. This is evidenced by the fact that the state estimate of the top plot is virtually identical to the measurement shown in the bottom plot. We also note that the estimation covariance is close to 0.0001, which is the value of the measurement covariance.

Figure 5-3. Plots for Example 3 - R=0.0001


In the above examples we always assumed that the parameters we used to design the Kalman filter were truly representative of the actual system parameters, the system disturbance and the measurement noise. As a result, the Kalman filter behaved as expected. That is, it properly combined the measurements and state information to provide good estimates of the states. Also, the estimate covariance was indicative of how well the state estimates compared to the actual state.

In most applications of Kalman filters we don’t have perfect knowledge of the system or the disturbances that perturb the system. In fact, as indicated earlier, our system model often consists of a Taylor series approximation of the system dynamics and we have no real knowledge of how to incorporate the system disturbances into this model. As a result, we use the system disturbance covariance, Q(k), to characterize model uncertainties. We explore this further in our next set of examples.

For this set of examples we consider the same system as before, except for the fact that we don’t know the value of the state transition parameter. We assume that it is F = 0.9 rather than its actual value of 0.97. We have also incorrectly determined that the covariance on the system disturbance is Q(k) = 0.0001 rather than its actual value of Q(k) = 0.001. Fortunately, we have made a correct assumption of our measurement model and the covariance on the measurement noise. We have also properly identified the value of the input distribution parameter as G = 1. As a result of the above, the model we use is as described by Equations 5-70 and 5-71 with 0.97 replaced by 0.9. The outputs of the simulation for this case are shown in Figure 5-4. As indicated, the filter performs very poorly in this situation. The state estimate is significantly different than the actual state. To aggravate matters, the estimate covariance is very small, which gives the impression that the filter is doing a good job of estimating the state. The problem in this instance is a combination of factors. Since we set Q(k) = 0.0001 and R(k+1) = 0.01 we are telling the filter that the measurements are very noisy relative to the system disturbances. For this reason, the Kalman filter applies more weight to the system model we have supplied. This is evidenced by the fact that the Kalman gain is very small. One of the tacit assumptions the Kalman filter makes is that the system model is correct. Thus, after it initially uses measurements to determine the state, it starts to ignore them and coast the filter based on the system model. If the system model is good, the filter will generally work well. However, if the system model is not good, as in this example, the filter will not work well. It must be emphasized that the Kalman filter will not know that it is not working well because it does not have knowledge of the actual state (as we did in our simulation). In practical situations, we, as users of the filter, will also not have access to the actual state. And, since the estimate covariance is small, relative to the measurement covariance, we will think that the filter is working well.
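This failure mode is easy to reproduce. In the sketch below (our own illustration, using the mismatch values from the text), the truth evolves with F = 0.97 and Q = 0.001 while the filter assumes F = 0.9 and Q = 0.0001; the reported covariance becomes small even though the actual error does not:

```python
import random

F_true, Q_true = 0.97, 0.001     # actual system
F_filt, Q_filt = 0.90, 0.0001    # (incorrect) filter model
H, R = 1.0, 0.01

x, xhat, P = 1.0, 0.0, 10.0
random.seed(2)
err = 0.0
for k in range(2000):
    x = F_true * x + random.gauss(0.0, Q_true ** 0.5)
    y = H * x + random.gauss(0.0, R ** 0.5)
    M = F_filt**2 * P + Q_filt
    K = H * M / (H**2 * M + R)
    xhat = F_filt * xhat + K * (y - H * F_filt * xhat)
    P = (1.0 - K * H) * M
    err += (x - xhat) ** 2

print(round(K, 3))          # small steady-state gain: the filter trusts its (wrong) model
print(round(P, 5))          # small reported covariance ...
print(round(err / 2000, 5)) # ... yet the actual mean squared error is much larger than P
```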


A standard means of coping with system model problems is to “tune” the filter by varying Q(k). In essence, we design the Kalman filter using the best system model we can find and then exercise the Kalman filter through simulation to see how well it performs. If the performance is not up to our standards we vary Q(k), design a new filter and try again.

Figure 5-5 shows the results of changing the Q(k) of the previous example to 0.1 instead of 0.0001. As can be seen, the filter appears to work much better. However, this is not actually true. If we compare the state estimate to the

Figure 5-4. Plots for Example 4 - Imperfect System Model, Q=0.0001


measurement we note that they are almost the same. This means that the Kalman filter is providing very little filtering. This conclusion is supported by the fact that the Kalman gain is almost at its maximum value of one, and that the estimate covariance is almost equal to the measurement covariance.

A third example is illustrated in Figure 5-6. In this case we have set Q(k) to 0.01. As can be seen, the filter works reasonably well. However, the state estimate is still somewhat noisy. We may be able to improve the performance by further adjusting Q(k).

Figure 5-5. Plots for Example 5 - Imperfect System Model, Q=0.1


An interesting issue with the last three examples regards the validity of the estimate covariance produced by the Kalman filter. For the case of Figure 5-4 the estimate covariance is clearly a poor estimate of the error between the estimated and actual state. However, in the examples of Figures 5-5 and 5-6 the estimate covariance appears to be a reasonably good estimate of the error between the estimated and actual state. In general, for examples like the last three, (where the system model is not good) one must build and test the Kalman filter to determine how well the estimate covariance represents the error between the estimated and actual state.

Figure 5-6. Plots for Example 6 - Imperfect System Model, Q=0.01


The Matlab code used in this example is included in the “Programs” folder as Example5_1.m.

5.5 Extension to the Vector Case

Up to now we have developed the equations for the scalar Kalman filter and used them to study the properties of the Kalman filter. In order to study further Kalman filter implementations we need to extend the Kalman filter equations to the time varying, vector case. We will derive the equations for the time varying, vector Kalman filter in Chapter 7. For now, we simply state the equations so that we can use them. We will also present the control theoretic formulation of the vector Kalman filter. The derivation of the scalar form of the control theoretic Kalman filter, and its extension to the vector case, is left as an exercise for the reader.

Suppose we have a system that we represent by the model

x(k+1) = F(k)x(k) + G(k)w(k) . (5-75)

Suppose further that the model for the system measurements is given by

y(k+1) = H(k+1)x(k+1) + v(k+1) . (5-76)

In Equation 5-75, w(k) is the system input disturbance and is assumed to be a white, zero-mean, random process with a covariance matrix of Q(k). In Equation 5-76, v(k+1) is the measurement noise and is assumed to be a white, zero-mean, random process with a covariance matrix of R(k+1). We further assume that w(k) and x(0) are uncorrelated for all k, v(k+1) and x(0) are uncorrelated for all k, and w(k) and v(m) are uncorrelated for all k and m. We let the covariance matrix of the initial state be P₀.²

Under the above conditions, the predicted state estimate at stage k+1, given the measurements up to stage k, is given by

x̂(k+1|k) = F(k)x̂(k|k) (5-77)

and the smoothed state estimate at stage k+1, given the measurements up to stage k+1, is given by

x̂(k+1|k+1) = x̂(k+1|k) + K(k+1)[y(k+1) − H(k+1)x̂(k+1|k)] (5-78)

where K(k+1) is the Kalman gain and is given by

K(k+1) = P(k+1|k)Hᵀ(k+1)[H(k+1)P(k+1|k)Hᵀ(k+1) + R(k+1)]⁻¹ . (5-79)

² Note that we are not assuming that the state is zero-mean. See the Appendix to this chapter.


In Equation 5-79, P(k+1|k) is the covariance on the error between the predicted state and the actual state and is defined by the equation

P(k+1|k) = F(k)P(k|k)Fᵀ(k) + G(k)Q(k)Gᵀ(k) (5-80)

where P(k|k) is the covariance on the error between the smoothed and actual states at stage k. The covariance on the error between the smoothed and actual states at stage k+1 is defined by the equation

P(k+1|k+1) = [I − K(k+1)H(k+1)]P(k+1|k) (5-81)

where I is the identity matrix. To initialize the Kalman filter we let P(0|0) = P₀ and x̂(0|0) = E[x(0)].
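Equations 5-77 through 5-81 transcribe directly into code. The sketch below is our own generic illustration (not the text’s program); it uses plain Python lists to stay self-contained and, as a sanity check, reproduces the scalar steady state of Example 5-1 when run with 1×1 matrices:

```python
# Minimal matrix helpers (lists of lists) and one cycle of the tracking-form
# vector Kalman filter, Eqs. 5-77 to 5-81 (scalar measurement assumed).

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def kf_step(xhat, P, y, F, G, H, Q, R):
    """One stage of Eqs. 5-77 to 5-81; y and R are scalars here."""
    x_pred = mat_mul(F, xhat)                                # Eq. 5-77
    P_pred = mat_add(mat_mul(mat_mul(F, P), transpose(F)),   # Eq. 5-80
                     mat_mul(mat_mul(G, Q), transpose(G)))
    S = mat_mul(mat_mul(H, P_pred), transpose(H))[0][0] + R  # innovation covariance
    K = [[r[0] / S] for r in mat_mul(P_pred, transpose(H))]  # Eq. 5-79
    innov = y - mat_mul(H, x_pred)[0][0]
    xhat = mat_add(x_pred, [[k[0] * innov] for k in K])      # Eq. 5-78
    P = mat_mul(mat_sub(identity(len(P)), mat_mul(K, H)), P_pred)  # Eq. 5-81
    return xhat, P, K

# Sanity check: with 1x1 matrices this must reproduce the scalar filter of
# Section 5.4 (F = 0.97, G = H = 1, Q = 0.001, R = 0.01).
xhat, P = [[0.0]], [[10.0]]
for _ in range(500):
    xhat, P, K = kf_step(xhat, P, 0.0, [[0.97]], [[1.0]], [[1.0]], [[0.001]], 0.01)
print(round(K[0][0], 3), round(P[0][0], 4))  # ~0.25 and ~0.0025, as in Example 5-1
```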

Another formulation of the vector Kalman filter has the equation

x̂(k+1) = F(k)x̂(k) + K(k+1)[y(k+1) − H(k+1)F(k)x̂(k)] , (5-82)

where K(k+1) is defined through Equations 5-79, 5-80 and 5-81.

The control theoretic form of the Kalman filter uses the state transition model of Equation 5-75 but uses a measurement model given by

y(k) = H(k)x(k) + v(k) . (5-83)

The resultant Kalman filter equations are

x̂(k+1) = F(k)x̂(k) + K(k)[y(k) − H(k)x̂(k)] , (5-84)

K(k) = F(k)P(k)Hᵀ(k)[H(k)P(k)Hᵀ(k) + R(k)]⁻¹ (5-85)

and

P(k+1) = [F(k) − K(k)H(k)]P(k)Fᵀ(k) + G(k)Q(k)Gᵀ(k) . (5-86)

The control theoretic form of the Kalman filter is initialized in the same manner that the tracking form is initialized. Specifically, we let P(0) = P₀ and x̂(0) = E[x(0)].
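For the scalar, time-invariant case, the recursion of Equations 5-85 and 5-86 is easy to iterate to steady state. In the sketch below (our own check, using the Example 5-1 parameters) the covariance settles near 0.0034, consistent with interpreting the control theoretic P(k) as a covariance on a predicted state:

```python
# Scalar control theoretic Kalman recursion, Eqs. 5-84 to 5-86, with the
# parameters of Example 5-1 (F = 0.97, G = H = 1, Q = 0.001, R = 0.01).
F, G, H, Q, R = 0.97, 1.0, 1.0, 0.001, 0.01

P = 10.0
for _ in range(500):
    K = F * P * H / (H**2 * P + R)      # Eq. 5-85
    P = (F - K * H) * P * F + G**2 * Q  # Eq. 5-86

print(round(P, 4))  # ~0.0034, i.e. F^2 * 0.0025 + Q: a *predicted* covariance
print(round(K, 4))  # ~0.245 (about F times the tracking form's steady-state gain)
```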

Block diagrams of the tracking and control theoretic formulations of the Kalman filter are shown in Figures 5-7 and 5-8. In these figures, the top part is the Kalman filter and the bottom part is the system (actually, the system model).


Figure 5-7. Block Diagram of the Tracking Form of the Kalman Filter


Now that we have the vector formulation of the Kalman filter we can apply it to a broader set of problems. Namely, those systems that we can represent by linear, time varying difference equations. While this encompasses a large set of problems it does not include those systems that are better represented by non-linear, time varying system equations and/or non-linear measurement equations. In the next chapter we extend our Kalman filter development to accommodate these types of systems. The resultant Kalman filter is commonly referred to as the Extended Kalman filter. It finds very wide application in target tracking problems as well as other areas.

5.6 Example 5-2 - Spring-Mass-Damper State Estimator

To illustrate the use of the vector Kalman filter we will consider the spring-mass-damper introduced in Chapter 2. From Equation 2-4 the state variable equation is given by

ẋ(t) = Ax(t) + bu(t) (5-87)

where

Figure 5-8. Block Diagram of the Control Theoretic Form of the Kalman Filter


x(t) = [x₁(t) x₂(t)]ᵀ ; A = [0 1; −K/M −B/M] ; b = [0; 1/M] ; u(t) = f(t)

with K = 9 and B = M = 1. The initial state is [x₁(0) x₂(0)]ᵀ = [1 0]ᵀ and f(t) = 0. Recall that x₁(t) represents the position of the mass, in m, and x₂(t) represents its velocity, in m/s. We assume that we can measure the position of the mass but that the measurement is corrupted by white, zero-mean, Gaussian noise with a variance of 0.01 m². The Gaussian specification is included because it is easy to simulate. As in the previous example, the white and zero-mean restriction is imposed by the Kalman filter design.

Equation 5-87 represents the truth. To design a Kalman filter we need a system and measurement model and a specification of the appropriate covariances and initial conditions. To begin, we need a discrete-time system model. Since we have the A and b we can write the discrete-time model as (see Equation 2-24)

x(k+1) = F x(k) + h w(k) (5-88)

where

F = e^(AT) = I + AT + A²T²/2! + A³T³/3! + ⋯ (5-89)

and

h = [IT + AT²/2! + A²T³/3! + A³T⁴/4! + ⋯] b . (5-90)

We acknowledge that we have included a system disturbance, w(k), and represent the model state as a random process. Since the actual system is not excited by noise, we use the system disturbance, and the associated Q(k) matrix, to “tune” the filter.
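The truncated series of Equations 5-89 and 5-90 are straightforward to evaluate numerically. The sketch below (our own illustration) computes F and h for A = [0 1; −9 −1], b = [0; 1] (i.e., K = 9, B = M = 1 as given) and an assumed sample period of T = 0.01 s, which the text does not specify:

```python
# Evaluate F = e^(AT) = I + AT + A^2 T^2/2! + ... and
# h = [IT + A T^2/2! + A^2 T^3/3! + ...] b   by series truncation.

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_scale(A, s):
    return [[a * s for a in row] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def discretize(A, b, T, terms=20):
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    F, H_int = I, mat_scale(I, T)   # running sums for Eq. 5-89 and the bracket of Eq. 5-90
    term, fact = I, 1.0
    for m in range(1, terms):
        term = mat_mul(term, A)     # A^m
        fact *= m                   # m!
        F = mat_add(F, mat_scale(term, T**m / fact))
        H_int = mat_add(H_int, mat_scale(term, T**(m + 1) / (fact * (m + 1))))
    return F, mat_mul(H_int, b)

A = [[0.0, 1.0], [-9.0, -1.0]]      # K = 9, B = M = 1
b = [[0.0], [1.0]]
T = 0.01                            # assumed sample period (not given in the text)
F, h = discretize(A, b, T)
print(F)
print(h)
```

For this small T the series converges in a handful of terms; doubling `terms` changes the result only at machine precision.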

Since we know that we measure position and we know the measurement noise, we can write the measurement model as

y(k+1) = Hx(k+1) + v(k+1) (5-91)

where H = [1 0] and R(k+1) = 0.01.

We assume that we don’t know the initial position but that we are pretty sure that it has to be less than ±3 m. We also don’t know the initial velocity but we think it should be less than ±10 m/s. Given this we will use an initial state estimate of x̂(0) = [0 0]ᵀ and

P(0) = [3² 0; 0 10²] .


Since we were given K, M and B, and we trust our information source, we will assume that the system model is perfect and choose Q(k) = 0.

The Matlab code used to implement this Kalman filter is included in the “Programs” folder as KMB_Kfilt.m. The results of running the simulation are shown in Figures 5-9 through 5-13. Figure 5-9 contains a plot of the actual position and velocity and the position and velocity estimate while Figure 5-10 contains a plot of the actual position and the measured position. In comparing these two figures it will be noted that the Kalman filter has done a good job of filtering out the measurement noise. This is further evidenced by Figure 5-11, which contains a plot of the errors between the actual position and the measured and estimated position. It will also be noted that the Kalman filter does a good job of estimating the velocity of the mass.

From Figure 5-9 it will be noted that the initial position and velocity estimates start at zero, as programmed. After the first measurement, the estimated position jumps up to the measured position. An examination of the Kalman gains indicates that this is expected since the value of K₁(1), the gain that is applied to the position measurement error and used to update the position, is 1.0. From our previous discussions, this means that the Kalman filter is ignoring the previous state estimate, i.e., x̂(0), and basing the new estimate solely on the measurement.

As time goes on, the Kalman gains approach zero. This means that the filter is ignoring the measurements and basing the estimates on the system model. In practical situations, this is not good because we don’t always have perfect knowledge of the system dynamics. A careful study of the Kalman filter equations, namely the covariance and Kalman gain equations, shows that if Q(k) = 0, the Kalman gains will always go to zero as k → ∞. Thus, a way to avoid allowing the Kalman gains to go to zero is to use a non-zero value for Q(k). Of course, one of the major Kalman filter design issues is how to choose a good Q(k).
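The claim that Q(k) = 0 drives the gains to zero is easy to verify with the scalar recursion (our own sketch; any stable F shows the same behavior):

```python
# Scalar covariance/gain recursion with zero process noise: with Q = 0 the
# covariance, and hence the Kalman gain, decays toward zero as k grows.
F, H, Q, R = 0.97, 1.0, 0.0, 0.01
P = 10.0
gains = []
for _ in range(2000):
    M = F**2 * P + Q
    K = H * M / (H**2 * M + R)
    P = (1.0 - K * H) * M
    gains.append(K)

print(round(gains[0], 3))    # starts near 1 (large initial uncertainty)
print(round(gains[-1], 6))   # essentially zero after many stages
```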

An examination of Figure 5-12 indicates that the position and velocity variances start out at their initial values of 3² and 10² and decrease to zero very quickly. This means that the Kalman filter thinks it is doing a good job of estimating the position and velocity, which it is. It must be emphasized that the behavior of P(k) and K(k) depends upon the F(k), G(k), H(k), Q(k) and R(k+1) matrices, which are specified by the filter designer. This means that P(k) will be a good indicator of filter performance only to the extent that the F(k), G(k), H(k), Q(k) and R(k+1) matrices are properly specified.


Figure 5-9. Actual and Estimated Position and Velocity – KMB #1

Figure 5-10. Actual and Measured Position – KMB #1


Figure 5-11. Position Errors – KMB #1


Figure 5-12. Covariances – KMB #1


Figures 5-14 through 5-18 show the results for the case where the designer has specified the wrong system model. In the actual system, the damping coefficient is 0.2 but in the model, the designer specified a damping coefficient of 1.0. It is clear from Figures 5-14 and 5-16 that the filter is doing a poor job of estimating the position and velocity. In fact, the errors (Figure 5-16) between the actual and estimated position are generally much larger than the errors between the actual and measured position.

Although we know that the filter is not working well, the filter thinks it is doing well. This is evidenced by the fact that the covariances and Kalman gains are going to zero. In fact, a comparison of the covariance and Kalman gain plots for the two cases show that they are identical. This is expected since the

F(k), G(k), H(k), Q(k) and R(k+1) matrices, which are specified by the designer, are the same in both cases.

The question now is how to make the filter work better. An obvious way is to improve the system model, and this is what must be done in many applications of the Kalman filter. Another way is to attempt to “tune” the filter by adjusting Q(k). This is left as a homework assignment for the reader. Adjusting Q(k) is often the first thing to try in an attempt to improve the performance of a Kalman filter. In some instances it is all that is needed. In other cases, as we will see, it is necessary to undertake the task of improving the system model.

Figure 5-13. Kalman Gains – KMB #1


Figure 5-14. Actual and Estimated Position and Velocity – KMB #2

Figure 5-15. Actual and Measured Position – KMB #2


Figure 5-16. Position Errors – KMB #2

Figure 5-17. Covariances – KMB #2


In the examples presented thus far we have based our evaluation of the Kalman filter on one execution. In practical applications, it is usually a good idea to run the filter many times. That is, to perform many Monte Carlo tries. A qualitative evaluation of filter performance can be made by examining plots like those presented above. More quantitative performance information can be obtained by computing statistics on the errors between the estimated and “actual” states. The word actual was put in quotation marks because the actual states are usually obtained from a detailed simulation of the system.
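A Monte Carlo evaluation of the scalar filter of Example 5-1 might look like the following sketch (our own illustration): it runs many independent simulations and compares the RMS estimation error against the RMS measurement error:

```python
import random

F, G, H, Q, R = 0.97, 1.0, 1.0, 0.001, 0.01
random.seed(3)
est_sq, meas_sq, n = 0.0, 0.0, 0
for trial in range(200):                 # Monte Carlo tries
    x, xhat, P = 1.0, 0.0, 10.0
    for k in range(100):
        x = F * x + random.gauss(0.0, Q ** 0.5)
        y = H * x + random.gauss(0.0, R ** 0.5)
        M = F**2 * P + G**2 * Q
        K = H * M / (H**2 * M + R)
        xhat = F * xhat + K * (y - H * F * xhat)
        P = (1.0 - K * H) * M
        if k >= 20:                      # skip the initial transient
            est_sq += (x - xhat) ** 2
            meas_sq += (y / H - x) ** 2
            n += 1

rms_est = (est_sq / n) ** 0.5
rms_meas = (meas_sq / n) ** 0.5
print(round(rms_est, 3), round(rms_meas, 3))  # estimation error well below measurement error
```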

5.7 Example 5-3 – Humvee Tracking Problem

In this example, we consider the problem of tracking a Humvee as it travels across different terrains. The three Humvee trajectories we consider are shown in Figures 5-19 through 5-21. For the trajectory of Figure 5-19 the Humvee is travelling in a straight line, at a constant velocity, toward the origin

of the coordinate system. For the trajectory of Figure 5-20 the Humvee must avoid “potholes” and thus must execute what appear to be random maneuvers. Finally, for the trajectory of Figure 5-21 the Humvee is travelling through a city and must make right turns.

The Humvee has a GPS position sensor on board that can measure x and y position. The errors on the x and y position measurements are independent, zero-mean and have a variance of 100 m². They are governed by a Gaussian density function. We want to see if we can build a Kalman filter that provides

Figure 5-18. Kalman Gains – KMB #2


position estimates that are better than those provided by direct use of the GPS measurements. We would also like to have an estimate of vehicle velocity.

Figure 5-19 – Humvee Trajectory 1

Figure 5-20 – Humvee Trajectory 2


To develop the Kalman filter we need a system and measurement model. It should be obvious that we don’t have enough information to develop a system model of the types used in Examples 5-1 and 5-2. In lieu of this, we will use the Taylor series approach discussed in Section 2.3. The states we wish to estimate are the x and y position and velocity. If we write these using the Taylor series expansion as discussed in Section 2.3 we get

x(k+1) = x(k) + Tẋ(k) + (T²/2!)ẍ(k) + ⋯
ẋ(k+1) = ẋ(k) + Tẍ(k) + ⋯
y(k+1) = y(k) + Tẏ(k) + (T²/2!)ÿ(k) + ⋯ (5-92)
ẏ(k+1) = ẏ(k) + Tÿ(k) + ⋯

In this expansion, we included only the position and velocity because these are the states of interest. If we were interested in estimating acceleration we would have also included Taylor expansions of acceleration.

In Equation 5-92 we are not interested in the terms that include second and higher order derivatives of the positions. Also, we don’t know what we would do with them if we included them. We choose to ignore them. However,

Figure 5-21 – Humvee Trajectory 3


we will add system disturbances, wᵢ(k), as we did in Example 5-2. Inclusion of the wᵢ(k) will force inclusion of the Q(k) matrix in the Kalman filter implementation. Q(k) can be used to tune the filter to attempt to account for the fact that the system model is approximate.

With this, the system model becomes

x(k+1) = x(k) + Tẋ(k) + w₁(k)
ẋ(k+1) = ẋ(k) + w₂(k)
y(k+1) = y(k) + Tẏ(k) + w₃(k) (5-93)
ẏ(k+1) = ẏ(k) + w₄(k) .

Note that the variables have been changed from deterministic processes to random processes. This is due to the fact that the Kalman filter development requires that we consider the wᵢ(k) to be zero-mean and white random processes.

We note that if we assume that the $w_i(k)$ are zero, not just zero-mean, then we make the assumption that the acceleration and higher order derivatives are zero. This means that the model carries the tacit assumption that the Humvee is traveling at a constant velocity in the x and y directions. This assumption will be good for the trajectory of Figure 5-19. In this case, the Kalman filter should work fairly well.

The assumption of constant velocity for the trajectories of Figures 5-20 and 5-21 is not good. For this reason, we expect that the filter may not work very well. The question will be whether we can make it work sufficiently well by choosing an appropriate $Q_k$ matrix.

We can write Equation 5-93 in state variable form as

$$
\mathbf{x}(k+1) = F\,\mathbf{x}(k) + G\,\mathbf{w}(k)
\tag{5-94}
$$

where

$$
\mathbf{x}(k) = \begin{bmatrix} x(k) \\ \dot{x}(k) \\ y(k) \\ \dot{y}(k) \end{bmatrix}, \quad
F = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad
G = I \quad \text{and} \quad
\mathbf{w}(k) = \begin{bmatrix} w_1(k) \\ w_2(k) \\ w_3(k) \\ w_4(k) \end{bmatrix}.
$$

The measurement model is

$$
\mathbf{y}(k+1) = H\,\mathbf{x}(k+1) + \mathbf{v}(k+1)
\tag{5-95}
$$

where


$$
H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \quad \text{and} \quad
\mathbf{v}(k+1) = \begin{bmatrix} v_x(k+1) \\ v_y(k+1) \end{bmatrix}.
$$

$v_x(k+1)$ and $v_y(k+1)$ are independent, white, zero-mean and Gaussian with variances of $\sigma_x^2 = \sigma_y^2 = 100\ \text{m}^2$.

Implementation of the Kalman filter for this example is left as a homework problem. Matlab data files that contain the true target trajectories can be found in the “Programs” folder with the names humvee1.mat (Figure 5-19), humvee2.mat (Figure 5-20), and humvee3.mat (Figure 5-21). In these files, x and y denote the x and y positions, xd and yd denote the x and y velocities and xdd and ydd denote the x and y accelerations. The data are spaced 0.05 seconds apart. For those who do not use Matlab the data are also stored as text files as humvee1.txt, humvee2.txt and humvee3.txt. Each file contains six columns of data. The columns are, in order, x, y, xd, yd, xdd and ydd.
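As a quick numerical illustration of the model above, Equations 5-94 and 5-95 can be sketched in Python with numpy. This sketch is not part of the text; the $Q$ entries and initial state below are arbitrary placeholders chosen only to exercise the equations.

```python
import numpy as np

T = 0.05  # sample period of the truth data (s)

# State vector x = [x, xdot, y, ydot]^T; constant-velocity model of Eq. 5-94
F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)
G = np.eye(4)                            # G = I
H = np.array([[1, 0, 0, 0],              # measure the x position
              [0, 0, 1, 0]], float)      # measure the y position

Q = np.diag([0.0, 0.5, 0.0, 0.5])        # hypothetical tuning values for Q_k
R = np.diag([100.0, 100.0])              # sigma_x^2 = sigma_y^2 = 100 m^2

rng = np.random.default_rng(0)
x = np.array([450.0, -10.0, 650.0, -10.0])   # illustrative initial state

# One propagation and measurement step (Eqs. 5-94 and 5-95)
w = rng.multivariate_normal(np.zeros(4), Q)
x_next = F @ x + G @ w
v = rng.multivariate_normal(np.zeros(2), R)
y_meas = H @ x_next + v
```

Note that only the two positions appear in the measurement, consistent with the $H$ of Equation 5-95.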


5.8 Problems

1. Repeat the Kalman filter derivation of this chapter for the case where the measurement index is k rather than k+1.

2. Derive Equation 5-7.

3. Show that Equation 5-22 is not true for k < m and that Equation 5-23 is not true for k < m.

4. Given that $\tilde{x}(k)$ satisfies the orthogonality condition, show that $E\{[x(k) - \hat{x}(k)]\,\hat{x}(m)\} = 0,\ m \le k$.

5. Prove Equations 5-22 and 5-23.

6. Derive Equation 5-67.

7. Show that
$$
\lim_{P(k)\to\infty} P(k+1) = \frac{R(k+1)}{H^2}.
$$

8. We are given a system whose state and actual output are described by the equations $x_a(k+1) = 0.97\,x_a(k)$, $x_a(0) = 1$ and $y_a(k+1) = x_a(k+1)$. (Note that the state and output are deterministic in this case.) Our model of this system is given by $x(k+1) = F\,x(k) + w(k)$, $x(0) = 0$ and $y(k+1) = H\,x(k+1) + v(k+1)$. In this model we are using $w(k)$ as a means of allowing us to account for the fact that our model may not be correct. Also, we recognize that we can't measure the true system output, only a version of it that is corrupted by measurement noise. We do know that the measurement noise is Gaussian, stationary, white and zero-mean, and has a variance of 0.01. We want to consider two problems.

a) $F = 0.9$ and $H = 1$.

b) $F = 0.97$ and $H = 0.8$.

For these two cases, find a value of $Q(k) = q$ that results in the best fit of the state estimate, $\hat{x}(k)$, to the actual state, $x_a(k)$.

10. Develop a Kalman filter for the system of Figure 2-4 with $a = 0.7$, $b = 0.7$ and $c = 0.2$. The actual measurement of the system output, $y(k)$, is corrupted by stationary, white, zero-mean, Gaussian noise with a variance of $R(k) = 10(0.9)^k$. The input to the system, $u(k)$, is stationary, white, zero-mean, Gaussian noise with a variance of 1.0. Provide appropriate plots and discussions to describe the operation of your Kalman filter. Be sure your discussions reflect the Kalman filter properties discussed in this chapter.

11. Develop a Kalman filter for the system of Figure 2-4 with $a = 0.9$, $b = 0.9$ and $c = 0.04$. The actual measurement of the system output, $y(k)$, is corrupted by stationary, white, zero-mean, Gaussian noise with a variance


of $R(k) = 10(0.9)^k$. In this case, the input is an unknown, constant value.

For purposes of simulation, select the constant value of the input from a uniform random number generator with a spread between -2 and 2. Since the input is not known it must be estimated as part of the Kalman filter. This can be accomplished by defining an appropriate model to represent the input and augmenting the system model to include the input model. Since the input is no longer a random process (it will actually become a state of

the augmented model), we want to use the $Q(k)$ matrix as a tuning parameter to affect the estimation properties of the filter. That is, we want to try to choose $Q(k)$ so that the Kalman filter does a good job of estimating

the two states of the original system. We don’t really care how well the Kalman filter estimates the unknown input.

12. Show that $P(k+1)$, as given in Equation 5-81, is a symmetric matrix.

13. Derive Equation 5-90. Hint: See Section 2.

14. Adjust $Q(k)$ of the modified KMB example (KMB #2) to make the filter work as well as you can.

15. Implement the Kalman filter discussed in Example 5-3. To create the position measurements, add to the true position data random numbers that are independent, zero-mean, Gaussian and have a standard deviation of 10 m. The true position data are in the files discussed near the end of Example 5-3. It is suggested that you let $Q_k$ be a diagonal matrix. Start with $Q_k = 0$. This should work well for the trajectory of Figure 5-19. It will not work well for the other two trajectories. See if you can find a $Q_k$ that will make the Kalman filter work well for the other two trajectories. You may need to use different $Q_k$'s for the different trajectories. The filter will be deemed to work well if its estimates are more accurate (have smaller errors) than the measurements. Use a sample period of $T = 0.1$ s. This is half the sample rate of the truth data stored in the files. You will need to down-sample the truth data to make it compatible with the desired sample period. Assume initial Humvee x and y positions of 450 m and 650 m, respectively. Assume initial Humvee x and y velocities of -10 m/s and -10 m/s, respectively. The negative velocities are consistent with the fact that the Humvee is headed toward the origin. The values are a guess. Be sure to choose $P(0)$ to properly reflect how much faith you have in the initial conditions.
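A possible starting point for the measurement generation in Problem 15 is sketched below in Python. The function name make_measurements is our own (hypothetical); the column layout and 0.05 s spacing follow the file description in Example 5-3, and decimating by 2 gives the desired 0.1 s sample period.

```python
import numpy as np

def make_measurements(truth, decimate=2, sigma=10.0, seed=0):
    """Down-sample truth data and add Gaussian position noise.

    truth: array with columns [x, y, xd, yd, xdd, ydd] spaced 0.05 s apart,
    as in the humvee*.txt files. decimate=2 yields a 0.1 s sample period.
    sigma: measurement-noise standard deviation in meters (10 m here).
    """
    rng = np.random.default_rng(seed)
    truth_ds = truth[::decimate]            # down-sample to the filter period
    pos = truth_ds[:, :2]                   # true x and y positions
    return pos + rng.normal(0.0, sigma, pos.shape)

# Hypothetical usage:
#   truth = np.loadtxt('humvee1.txt')
#   z = make_measurements(truth)
```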


APPENDIX A – AN ALTERNATE DERIVATION OF THE SCALAR KALMAN FILTER

We want to present a derivation of the scalar Kalman filter that is different from the one presented in the text. The derivation in the text was an inductive type of proof. This proof is more direct. It is also somewhat shorter. A drawback is that one needs to start with the assumption that the Kalman filter is recursive. The inductive proof did not need to make this assumption; it followed from the derivation.

We start with a system model given by

$$
x(k+1) = F\,x(k) + G\,w(k)
\tag{1}
$$

and a measurement model given by

$$
y(k+1) = H\,x(k+1) + v(k+1).
\tag{2}
$$

$w(k)$ is the system disturbance and is zero-mean and white. It has a variance of $Q(k)$. In equation form this translates to

$$
E\{w(k)\} = 0 \quad \forall k
\tag{3}
$$

and

$$
E\{w(k)\,w(l)\} = Q(k)\,\delta(k-l)
\tag{4}
$$

where $\delta(k)$ is the Kronecker delta function.

We assume that the initial state, $x(0)$, has a non-zero mean³ of $\bar{x}$ and has a variance of $P(0)$. In equation form

$$
E\{x(0)\} = \bar{x}
\tag{5}
$$

and

$$
E\{[x(0) - \bar{x}]^2\} = P(0).
\tag{6}
$$

$v(k)$ is the measurement noise and is zero-mean and white. It has a variance of $R(k)$. In equation form this translates to

$$
E\{v(k)\} = 0 \quad \forall k
\tag{7}
$$

and

$$
E\{v(k)\,v(l)\} = R(k)\,\delta(k-l).
\tag{8}
$$

³ Note that we are removing the zero-mean assumption on the initial state that we imposed in the derivation in the text.


We also assume that $w(k)$, $v(l)$ and $x(0)$ are mutually uncorrelated. In equation form this translates to

$$
E\{w(k)\,x(0)\} = 0 \quad \forall k\,^4,
\tag{9}
$$

$$
E\{v(l)\,x(0)\} = 0 \quad \forall l
\tag{10}
$$

and

$$
E\{w(k)\,v(l)\} = 0 \quad \forall k, l.
\tag{11}
$$

All of the above conditions on $w(k)$, $v(l)$ and $x(0)$ allow us to write

$$
E\{x(k)\,w(m)\} = 0, \quad m \ge k,
\tag{12}
$$

$$
E\{x(k)\,v(m)\} = 0 \quad \forall k, m,
\tag{13}
$$

$$
E\{y(k)\,v(m)\} = 0, \quad k \ne m,
\tag{14}
$$

$$
E\{\hat{x}(k)\,v(m)\} = 0, \quad m \ge k+1
\tag{15}
$$

and

$$
E\{\hat{x}(k)\,w(m)\} = 0, \quad m \ge k.
\tag{16}
$$

We proved (12) in Chapter 4. We will digress to prove (13). The proofs of the rest are left as a homework assignment. From Chapter 4 we have

$$
x(k) = F^k\,x(0) + \sum_{l=0}^{k-1} F^{k-1-l}\,G\,w(l).
\tag{17}
$$

With this we can write

$$
E\{x(k)\,v(m)\} = F^k\,E\{x(0)\,v(m)\} + \sum_{l=0}^{k-1} F^{k-1-l}\,G\,E\{w(l)\,v(m)\}.
\tag{18}
$$

The first expectation on the right is zero by (10) and all of the expectations in the sum are zero by (11). Thus the right side of (18) is zero for all k and m and (13) is established.

As indicated earlier, the proofs of (14) through (16) are left as a homework assignment. To prove (15) and (16) you can use (14) and the fact that

$$
\hat{x}(k) = \sum_{l=1}^{k} b_l\,y(l) + \hat{x}(0).
\tag{19}
$$

⁴ We can use correlation instead of covariance because $w(k)$ and $v(k)$ are zero-mean, even though $x(0)$ is not zero-mean.


We added the last term because of the fact that $x(0)$ is not zero-mean. In (19), $\hat{x}(0)$ is a number, not a random variable; thus the notation $\hat{x}(0)$ instead of $x(0)$. You showed for homework that

$$
\hat{x}(0) = E\{x(0)\} = \bar{x}.
\tag{20}
$$

We are now ready to formally state the problem needed to design a Kalman filter. Given the system and measurement models defined by (1) and (2), and the aforementioned conditions on $w(k)$, $v(l)$ and $x(0)$, find a linear, recursive estimator

$$
\hat{x}(k+1) = a_{k+1}\,\hat{x}(k) + b_{k+1}\,y(k+1), \quad \hat{x}(0) = E\{x(0)\}
\tag{21}
$$

that minimizes the mean-squared error

$$
P(k+1) = E\{[x(k+1) - \hat{x}(k+1)]^2\}, \quad P(0) = E\{[x(0) - \hat{x}(0)]^2\} = P_0.
\tag{22}
$$

Our specific problem is to find the $a_{k+1}$ and $b_{k+1}$ that minimize (22). To start we substitute (21) into (22) to yield

$$
P(k+1) = E\{[x(k+1) - a_{k+1}\,\hat{x}(k) - b_{k+1}\,y(k+1)]^2\}.
\tag{23}
$$

We note that (23) is a positive quadratic in $a_{k+1}$ and $b_{k+1}$. Thus, the minimum of (23) occurs at those values of $a_{k+1}$ and $b_{k+1}$ for which

$$
\frac{\partial P(k+1)}{\partial a_{k+1}} = 0
\tag{24}
$$

and

$$
\frac{\partial P(k+1)}{\partial b_{k+1}} = 0.
\tag{25}
$$

If we substitute (23) into (24) and (25) we get

$$
\begin{aligned}
0 = \frac{\partial P(k+1)}{\partial a_{k+1}} &= -2E\{[x(k+1) - a_{k+1}\,\hat{x}(k) - b_{k+1}\,y(k+1)]\,\hat{x}(k)\} \\
&= -2E\{[x(k+1) - \hat{x}(k+1)]\,\hat{x}(k)\}
\end{aligned}
\tag{26}
$$

and

$$
\begin{aligned}
0 = \frac{\partial P(k+1)}{\partial b_{k+1}} &= -2E\{[x(k+1) - a_{k+1}\,\hat{x}(k) - b_{k+1}\,y(k+1)]\,y(k+1)\} \\
&= -2E\{[x(k+1) - \hat{x}(k+1)]\,y(k+1)\}.
\end{aligned}
\tag{27}
$$

The last terms in (26) and (27) were included because they will be needed to show that (32) (coming up) is zero.


We next need to manipulate (26) and (27) to solve for $a_{k+1}$ and $b_{k+1}$ in terms of $F$, $G$, $H$, $Q(k)$, $R(k+1)$ and $P(k)$. Expanding (26) yields

$$
E\{x(k+1)\,\hat{x}(k)\} - a_{k+1}E\{\hat{x}^2(k)\} - b_{k+1}E\{y(k+1)\,\hat{x}(k)\} = 0.
\tag{28}
$$

Using (1), the first term becomes

$$
E\{x(k+1)\,\hat{x}(k)\} = F\,E\{x(k)\,\hat{x}(k)\} + G\,E\{w(k)\,\hat{x}(k)\} = F\,E\{x(k)\,\hat{x}(k)\}
\tag{29}
$$

since $E\{w(k)\,\hat{x}(k)\} = 0$ by (16). Using (2), (15) and (29) in the third term of (28) results in

$$
E\{y(k+1)\,\hat{x}(k)\} = HF\,E\{x(k)\,\hat{x}(k)\}.
\tag{30}
$$

Substituting (29) and (30) into (28) yields

$$
(F - b_{k+1}HF)\,E\{x(k)\,\hat{x}(k)\} - a_{k+1}E\{\hat{x}^2(k)\} = 0.
\tag{31}
$$

To reduce this further, we consider the expectation $E\{[x(k) - \hat{x}(k)]\,\hat{x}(k)\}$. If we substitute for $\hat{x}(k)$ from (21), with an appropriate adjustment of indices, we get

$$
E\{[x(k) - \hat{x}(k)]\,\hat{x}(k)\} = a_k E\{[x(k) - \hat{x}(k)]\,\hat{x}(k-1)\} + b_k E\{[x(k) - \hat{x}(k)]\,y(k)\}.
\tag{32}
$$

If we further assume that we have chosen $a_k$ and $b_k$ so that $P(k)$ is minimized then, by (26) and (27), the two expectations on the right are zero. With this we have

$$
E\{[x(k) - \hat{x}(k)]\,\hat{x}(k)\} = 0
\tag{33}
$$

or

$$
E\{x(k)\,\hat{x}(k)\} = E\{\hat{x}^2(k)\}.
\tag{34}
$$

If we use (34) in (31) we get

$$
a_{k+1} = F - b_{k+1}HF.
\tag{35}
$$

If we substitute (1), (2) and (35) into (27), perform considerable manipulations, and make use of previous results we get

$$
b_{k+1} = K(k+1) = \frac{H\,[F^2 P(k) + G^2 Q(k)]}{H^2[F^2 P(k) + G^2 Q(k)] + R(k+1)}.
\tag{36}
$$

In the above we changed the notation from bk+1 to K(k+1) to denote the fact that this is termed the Kalman gain.
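As a quick numerical sanity check, one can verify by Monte Carlo that the gains of (35) and (36) do minimize the mean-squared error (23). The Python sketch below is our own illustration, not part of the derivation; the values of F, G, H, Q, R and P(k) are arbitrary, and the estimate is constructed to be orthogonal to the estimation error, as required by (33).

```python
import numpy as np

# Arbitrary illustrative values (not from the text)
F, G, H = 0.9, 1.0, 2.0
Q, R, P = 0.5, 0.3, 1.2          # Q(k), R(k+1), P(k)

rng = np.random.default_rng(1)
n = 200_000
xhat = rng.normal(0.0, 1.0, n)               # estimate x_hat(k)
err = rng.normal(0.0, np.sqrt(P), n)         # error, independent of x_hat(k)
x = xhat + err                               # true state x(k)
w = rng.normal(0.0, np.sqrt(Q), n)           # disturbance w(k)
v = rng.normal(0.0, np.sqrt(R), n)           # measurement noise v(k+1)
x1 = F * x + G * w                           # x(k+1), Eq. (1)
y1 = H * x1 + v                              # y(k+1), Eq. (2)

M = F**2 * P + G**2 * Q                      # predicted variance F^2 P(k) + G^2 Q(k)
b_star = H * M / (H**2 * M + R)              # Eq. (36)
a_star = F - b_star * H * F                  # Eq. (35)

def mse(a, b):
    """Empirical mean-squared error of Eq. (23) for candidate gains a, b."""
    return np.mean((x1 - a * xhat - b * y1) ** 2)
```

Perturbing either gain away from (a_star, b_star) should increase mse, and mse(a_star, b_star) should agree with the P(k+1) predicted by (40).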

If we substitute $b_{k+1} = K(k+1)$ and (35) into (21) and perform some simple manipulations we get the Kalman filter equation as


$$
\hat{x}(k+1) = F\,\hat{x}(k) + K(k+1)\,[y(k+1) - HF\,\hat{x}(k)]
\tag{37}
$$

where the Kalman gain, $K(k+1)$, is given by (36).

To complete the development we need to develop an equation for computing $P(k)$. From (22) we have

$$
\begin{aligned}
P(k+1) &= E\{[x(k+1) - \hat{x}(k+1)]^2\} \\
&= E\{[x(k+1) - \hat{x}(k+1)]\,x(k+1)\} - E\{[x(k+1) - \hat{x}(k+1)]\,\hat{x}(k+1)\}.
\end{aligned}
\tag{38}
$$

Since we have chosen $a_{k+1}$ and $b_{k+1}$ to minimize $P(k+1)$, the last expectation is zero (see the discussion associated with (32) and (33)). Thus,

$$
P(k+1) = E\{[x(k+1) - \hat{x}(k+1)]\,x(k+1)\}.
\tag{39}
$$

If we make use of (1) and (37), perform considerable manipulations, and use previous results we obtain

$$
P(k+1) = [1 - K(k+1)H]\,[F^2 P(k) + G^2 Q(k)], \quad P(0) = P_0.
\tag{40}
$$

Equations (36), (37) and (40) constitute the Kalman filter equations. Equation (36) is the Kalman gain equation, (37) is the state update equation and (40) is the covariance (variance) update equation.
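The three equations are straightforward to implement. The Python sketch below is a minimal illustration (the function and argument names are our own) of the scalar recursion of (36), (37) and (40) over a measurement sequence:

```python
import numpy as np

def scalar_kalman(y, F, G, H, Q, R, xhat0, P0):
    """Scalar Kalman filter of Eqs. (36), (37) and (40).

    y[k] holds the measurement y(k+1); Q[k] and R[k] hold Q(k) and R(k+1).
    Returns the estimates x_hat(k+1) and variances P(k+1) for each step.
    """
    xhat, P = xhat0, P0
    xhats, Ps = [], []
    for k in range(len(y)):
        M = F**2 * P + G**2 * Q[k]                   # F^2 P(k) + G^2 Q(k)
        K = H * M / (H**2 * M + R[k])                # Kalman gain, Eq. (36)
        xhat = F * xhat + K * (y[k] - H * F * xhat)  # state update, Eq. (37)
        P = (1.0 - K * H) * M                        # variance update, Eq. (40)
        xhats.append(xhat)
        Ps.append(P)
    return np.array(xhats), np.array(Ps)
```

For constant F, G, H, Q and R, the gain and variance sequences typically settle to steady-state values.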