Copyright 2004 Aaron Lanterman
Variations on the Kalman Filter and Multitarget Tracking Issues
Prof. Aaron D. Lanterman
School of Electrical & Computer Engineering, Georgia Institute of Technology
AL: 404-385-2548 <[email protected]>
ECE 7251: Spring 2004, Lecture 18
2/18/04
The Setup for the Extended KF
• State equation: θ_{k+1} = f(θ_k) + U_k  (θ_k, U_k are m×1)
• Measurement eqn.: Y_k = h(θ_k) + W_k  (Y_k, W_k are n×1)
• Process "noise" covariance: U_k ~ N(0, K_U)
• Measurement noise covariance: W_k ~ N(0, K_W)
• Initial guess, before taking any data: θ_1 ~ N(θ̂_{1|0}, P_{1|0})
• P_{1|0} is a covariance indicating confidence in the initial guess
• U_k, W_k, θ_1 are uncorrelated with each other and for different k
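The setup above can be simulated directly. A minimal sketch, assuming an invented scalar toy model f(θ) = θ + 0.1 sin θ and h(θ) = θ² (these choices are illustrative, not from the lecture):

```python
import math
import random

# Toy instance of the state-space setup above: a scalar state theta
# propagated by theta_{k+1} = f(theta_k) + U_k, observed through
# Y_k = h(theta_k) + W_k. Model functions are illustrative only.
def simulate(theta1, n_steps, sigma_u=0.1, sigma_w=0.2, seed=0):
    rng = random.Random(seed)
    theta, ys = theta1, []
    for _ in range(n_steps):
        ys.append(theta**2 + rng.gauss(0.0, sigma_w))  # measurement eqn.
        theta = (theta + 0.1 * math.sin(theta)
                 + rng.gauss(0.0, sigma_u))            # state eqn.
    return ys

ys = simulate(1.0, 20)
```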
The Extended Kalman Filter
• In the state update, go ahead and use the nonlinear models for predicting the state and data
• Use a local linearization of f and h in computing the covariance update and the Kalman gain:

θ̂_{k+1} = f(θ̂_k) + L_{k+1} (Y_{k+1} − h(f(θ̂_k)))
L_{k+1} = P_{k+1|k} Cᵀ(θ̂_k) [C(θ̂_k) P_{k+1|k} Cᵀ(θ̂_k) + K_W]⁻¹
P_{k+1|k} = A(θ̂_k) P_{k|k} Aᵀ(θ̂_k) + K_U
P_{k+1|k+1} = P_{k+1|k} − P_{k+1|k} Cᵀ(θ̂_k) [C(θ̂_k) P_{k+1|k} Cᵀ(θ̂_k) + K_W]⁻¹ C(θ̂_k) P_{k+1|k}
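In the scalar case A and C are just derivatives, so one cycle of the recursion fits in a few lines. A sketch, again assuming the invented toy model f(θ) = θ + 0.1 sin θ, h(θ) = θ² and made-up noise covariances:

```python
import math

def ekf_step(theta_hat, P, y_next, K_U=0.01, K_W=0.04):
    """One cycle of the EKF recursion for a scalar toy model
    f(t) = t + 0.1*sin(t), h(t) = t**2 (illustrative choices only)."""
    A = 1.0 + 0.1 * math.cos(theta_hat)                # A(theta_hat) = f'(theta_hat)
    theta_pred = theta_hat + 0.1 * math.sin(theta_hat) # f(theta_hat)
    P_pred = A * P * A + K_U                           # P_{k+1|k}
    C = 2.0 * theta_hat                                # C(theta_hat) = h'(theta_hat)
    L = P_pred * C / (C * P_pred * C + K_W)            # Kalman gain L_{k+1}
    theta_new = theta_pred + L * (y_next - theta_pred**2)  # uses h(f(theta_hat))
    P_new = P_pred - L * C * P_pred                    # P_{k+1|k+1}
    return theta_new, P_new

theta_new, P_new = ekf_step(1.0, 1.0, 1.2)
```

Note how the prediction uses the full nonlinear f and h, while A and C only enter the covariance and gain computations, exactly as the slide prescribes.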
Linearizing the Dynamics Model
• Slope of tangent plane for the dynamics (m×m Jacobian):

A(θ̂) = [ ∂f_1(θ)/∂θ_1  ∂f_1(θ)/∂θ_2  …  ∂f_1(θ)/∂θ_m ]
        [ ∂f_2(θ)/∂θ_1  ∂f_2(θ)/∂θ_2  …  ∂f_2(θ)/∂θ_m ]
        [      ⋮              ⋮                ⋮       ]
        [ ∂f_m(θ)/∂θ_1  ∂f_m(θ)/∂θ_2  …  ∂f_m(θ)/∂θ_m ]  evaluated at θ = θ̂
Linearizing the Measurements
• Slope of tangent plane for the measurements (n×m Jacobian):

C(θ̂) = [ ∂h_1(θ)/∂θ_1  ∂h_1(θ)/∂θ_2  …  ∂h_1(θ)/∂θ_m ]
        [ ∂h_2(θ)/∂θ_1  ∂h_2(θ)/∂θ_2  …  ∂h_2(θ)/∂θ_m ]
        [      ⋮              ⋮                ⋮       ]
        [ ∂h_n(θ)/∂θ_1  ∂h_n(θ)/∂θ_2  …  ∂h_n(θ)/∂θ_m ]  evaluated at θ = θ̂
Using Polar Radar Measurements in EKF
• Nonlinear measurement mapping (state θ = (x, ẋ, y, ẏ)):

h(θ) = [ √(x² + y²)  ]
       [ arctan(y, x) ]

arctan(y, x) = arctan(y/x)  (checking to make sure the answer is in the correct quadrant)

• Local linearization for EKF (constant velocity model):

C(θ) = [  x/√(x²+y²)   0   y/√(x²+y²)   0 ]
       [ −y/(x²+y²)    0   x/(x²+y²)    0 ]
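The polar mapping and its Jacobian are easy to check numerically. A sketch, assuming the constant-velocity state ordering (x, ẋ, y, ẏ) used above:

```python
import math

# Polar measurement mapping h and its Jacobian C from the slide,
# assuming the constant-velocity state ordering (x, xdot, y, ydot).
def h(x, y):
    # range, plus quadrant-correct angle via atan2
    return (math.hypot(x, y), math.atan2(y, x))

def C(x, y):
    r2 = x * x + y * y
    r = math.sqrt(r2)
    return [[ x / r,  0.0, y / r,  0.0],   # d(range)/d(x, xdot, y, ydot)
            [-y / r2, 0.0, x / r2, 0.0]]   # d(angle)/d(x, xdot, y, ydot)

meas = h(3.0, 4.0)   # range 5.0, angle atan2(4, 3)
jac = C(3.0, 4.0)
```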
Problems with the EKF
• Even if process noise and measurement noise are Gaussian, since there are nonlinear transformations, the true “Bayesian posterior” density is not Gaussian
• No longer optimal in the linear MMSE sense (a nice property the original Kalman filter had)
• Only as good as the linearization; if the estimate gets too far off, the linearization is not good, and we get EKF divergence which is horrible, bad, unpleasant, terrible, and devastating.
• Covariance estimates are often overoptimistic
Combining Different State Models
• A constant velocity model
– Tracks well for straight paths
– But has trouble catching up with target maneuvers
• A constant acceleration or Singer model
– Handles maneuvers better
– But may give wobbly tracks when the target is really flying straight
• Idea: try to use several different models
– Switch between models, say with different covariance matrices or different orders, by some criteria
– Run several Kalman filters in parallel, and let them interact
Interacting Multiple Model
• Assume transition probabilities of target switching from one model to another are known
• Run Kalman filters for the different models in parallel (some may be extended, some not)
• Outputs of the different Kalman filters are mixed, based on estimates of the model probabilities, before being fed back into the Kalman recursion
• Estimates of model probabilities continuously updated
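The mixing computation can be sketched in a few lines. The two-model transition matrix below is invented for illustration; `mix` produces the predicted model probabilities and the mixing weights that blend the parallel filters' outputs:

```python
# Sketch of the IMM mixing computation, assuming two models with a
# known (here, illustrative) transition matrix PI.
PI = [[0.95, 0.05],   # PI[i][j] = P(model j at scan k+1 | model i at scan k)
      [0.10, 0.90]]

def mix(mu, PI):
    """mu[i] = current probability of model i."""
    n = len(mu)
    # Predicted probability of each model j
    c = [sum(PI[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Mixing weight w[j][i]: probability the target was in model i,
    # given it is now in model j
    w = [[PI[i][j] * mu[i] / c[j] for i in range(n)] for j in range(n)]
    return c, w

c, w = mix([0.8, 0.2], PI)
```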
Multitarget Multisensor Tracking
• Problem: don't know which measurements go with which targets
• Can have false alarms and missing detections
• Targets enter and leave the scene at unknown times
• Optimal solution has O(M!)-type complexity
– The association problem is "NP-hard," meaning it's as hard as many well-known problems (such as the Traveling Salesman) for which no O(M²), O(M⁵), or even O(M¹⁰⁰) solution is known…
• Suboptimal solutions must be used in practice
MTMS Ex. 1: Angle-Only Measurements
• Electronic Support Measure (ESM) and Infrared Search and Track (IRST) sensors provide angle information, but no range
• Can use triangulation to initialize estimates and multisensor multitarget algorithms to track
• Ghost targets are a difficult problem
Ghosts in Triangulation Systems
• Ex: two targets with two sensors yields a "ghosting ambiguity"
• Ideally, could solve by adding a third sensor
– Only declare a target where 3 beams intersect
Ghosts Due to Measurement Errors
• In reality
– Angle measurements will be subject to errors
– May be missing measurements
• Makes locating and tracking much harder
MTMS Ex. 2: Passive Coherent Location
• Can track using commercial FM radio or television stations as the illuminator
• Examples
– Lockheed Martin's Silent Sentry
– Demonstrator systems built by DERA and NATO
• Low frequency, low bandwidth, continuous wave
– Poor angle and range measurement
– Excellent Doppler measurement
• Can use multilateration to initialize estimates
• Each transmitter-receiver pair can be thought of as a sensor
Ghosts in Multilateration Systems
• A range measurement by a transmitter-receiver pair in a bistatic system places the target on an ellipse
• Subject to similar ghosting problems as in triangulation systems
Approaches to Multitarget Tracking (1)
• Almost all algorithms use an initial "gating" stage, throwing out wildly unlikely associations
– Define an ellipsoid around the predicted state estimate based on the prediction covariance
– Throw out associations with measurements that fall outside this ellipsoid
• Ad-hoc association algorithms
– If targets are few and spaced far apart, almost anything reasonable will work!
• Symmetric Measurement Equations
– Clever approach: make a new set of "pseudomeasurements" which does not depend on the associations!
– Problem becomes highly nonlinear; the ordinary EKF has trouble handling it
– No easy way to handle track initialization
– Seem to be of solely theoretical interest at present
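The gating stage amounts to a Mahalanobis-distance test on the innovation. A sketch for a 2-D measurement (the threshold 9.21 is the 99% point of the chi-square distribution with 2 degrees of freedom; the covariance numbers are invented):

```python
# Ellipsoidal gating test: keep an association only if the squared
# Mahalanobis distance of the innovation is below a chi-square
# threshold (9.21 = 99% point for 2 degrees of freedom).
def inside_gate(innov, S, gamma=9.21):
    (a, b), (c, d) = S                 # 2x2 innovation covariance
    det = a * d - b * c
    dx, dy = innov
    # d2 = innov^T S^{-1} innov, with the 2x2 inverse written out
    d2 = (dx * (d * dx - b * dy) + dy * (-c * dx + a * dy)) / det
    return d2 <= gamma

I2 = [[1.0, 0.0], [0.0, 1.0]]
near = inside_gate((1.0, 1.0), I2)   # d2 = 2, inside the gate
far = inside_gate((4.0, 4.0), I2)    # d2 = 32, outside the gate
```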
Approaches to Multitarget Tracking (2)
• Joint Probabilistic Data Association
– Handles association on a scan-by-scan basis
– Makes "soft" instead of "hard" decisions; several measurements in a single scan may contribute to the state estimate of a particular target
– Good combination of ease of implementation and high performance
• Multiple Hypothesis Testing
– Makes hard associations based on associating several scans into the past
– Enumerates hypotheses as branching trees; to keep the computation from growing too large, prunes the less likely ones
– Difficult to program; data structures are complicated
Appendix: Constrained Optimization Approach
• Cast the multisensor-multitarget problem as a constrained optimization problem
• Advantage: lots of work being done on related problems by the operations research community
• Find a good (but not necessarily the best) solution with "Lagrangian relaxation" techniques
• Two main groups pushing this approach:
– Univ. of Connecticut (Deb, Pattipati, Bar-Shalom, etc.)
– Colorado State Univ./Numerica, Inc. (Aubrey Poore)
• Ben Slocumb, a colleague at Numerica (formerly at GTRI), tells me that this is the Way To Go for big problems
Constrained Optimization Notation
• Suppose processing n past scans from a single sensor, or current scans from n different sensors
• M_k, k = 1…n indicates the number of measurements made on scan k
• Assignment variables z_{i_1⋯i_n} ∈ {0,1}
– From left to right, positions in the subscript indicate scans; the number in each position indicates the measurement index within that scan
• Example: z_{302} = 1
– Means observation 3 on scan 1 and observation 2 on scan 3 belong to the same source
– A zero indicates a missed detection, so here the source was missed on scan 2
Source: Blackman & Popoli, Sec. 7.3.1
Constrained Optimization Notation (Cont'd)
• Calculate "costs" c_{i_1⋯i_n}
– Negative loglikelihood
– Usually obtained from the means and covariances of a (possibly extended) Kalman filter
• Want to minimize the total cost

  Σ_{i_1=0}^{M_1} ⋯ Σ_{i_n=0}^{M_n} c_{i_1⋯i_n} z_{i_1⋯i_n}

– Subject to the constraint that each measurement belongs to at most one track
Constrained Optimization Formulation

Minimize  Σ_{i_1=0}^{M_1} ⋯ Σ_{i_n=0}^{M_n} c_{i_1⋯i_n} z_{i_1⋯i_n}

Subject to
  Σ_{i_2=0}^{M_2} ⋯ Σ_{i_n=0}^{M_n} z_{i_1 i_2⋯i_n} = 1,  for i_1 = 1,…,M_1,
  Σ_{i_1=0}^{M_1} ⋯ Σ_{i_{k−1}=0}^{M_{k−1}} Σ_{i_{k+1}=0}^{M_{k+1}} ⋯ Σ_{i_n=0}^{M_n} z_{i_1⋯i_n} = 1,  for i_k = 1,…,M_k, for k = 2,…,n−1,
  Σ_{i_1=0}^{M_1} ⋯ Σ_{i_{n−1}=0}^{M_{n−1}} z_{i_1⋯i_n} = 1,  for i_n = 1,…,M_n,
  z_{i_1⋯i_n} ∈ {0,1}
What Was That Last Slide Saying???
• Suppose M₁ = M₃ = 3 and M₂ = 2
– Means we received 3 measurements on scans 1 and 3, but only 2 measurements on scan 2
• The constraint equation for the first measurement on the first scan is

  z_{100} + z_{101} + z_{102} + z_{103} + z_{110} + z_{111} + ⋯ + z_{123} = 1

• That just says that any valid association must take that first measurement on the first scan into account, and that it can't be claimed by more than one track
Source: Blackman & Popoli, pp. 408-409
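The index tuples appearing in such a constraint can be enumerated mechanically. A sketch, where `constraint_terms` is an invented helper (scans are 0-indexed here, and index 0 on any other scan means a missed detection there):

```python
from itertools import product

def constraint_terms(scan, meas, M):
    """Index tuples (i1, ..., in) appearing in the assignment
    constraint for measurement `meas` on scan `scan` (0-based).
    M[k] is the number of measurements on scan k."""
    ranges = [range(M[k] + 1) for k in range(len(M))]
    ranges[scan] = [meas]         # pin this scan's index to `meas`
    return list(product(*ranges))

# First measurement on the first scan, with M1 = M3 = 3, M2 = 2:
terms = constraint_terms(0, 1, [3, 2, 3])
```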
Relaxed Problem

Minimize  Σ_{i_1=0}^{M_1} ⋯ Σ_{i_n=0}^{M_n} c_{i_1⋯i_n} z_{i_1⋯i_n}
          + Σ_{k=3}^{n} Σ_{i_k=1}^{M_k} u_{i_k}^{k} [ Σ_{i_1=0}^{M_1} ⋯ Σ_{i_{k−1}=0}^{M_{k−1}} Σ_{i_{k+1}=0}^{M_{k+1}} ⋯ Σ_{i_n=0}^{M_n} z_{i_1⋯i_n} − 1 ]

Subject to
  Σ_{i_2=0}^{M_2} ⋯ Σ_{i_n=0}^{M_n} z_{i_1⋯i_n} = 1,  for i_1 = 1,…,M_1,
  Σ_{i_1=0}^{M_1} Σ_{i_3=0}^{M_3} ⋯ Σ_{i_n=0}^{M_n} z_{i_1⋯i_n} = 1,  for i_2 = 1,…,M_2,
  z_{i_1⋯i_n} ∈ {0,1}

The u_{i_k}^{k} are Lagrange multipliers; the constraints for scans 3 through n have been folded into the cost. The relaxed problem is a two-dimensional assignment problem, which can be solved in O(n³) time.
Interpretation of Lagrange Multipliers
• The Lagrange multiplier u_{i_k}^{k} is responsible for punishing misuse of measurement i_k from the kth scan
• Example interpretation:
– u_2^4 > 0 penalizes using measurement #2 from scan 4 more than once
– u_2^4 < 0 penalizes not using #2 from scan 4 at all
• Must iteratively refine the Lagrange multipliers to satisfy the constraints
– Extraordinarily complicated
– Accounts for the main differences between algorithms
The Relaxation Algorithm
1. Initialize Lagrange multipliers to zero
2. Solve the relaxed problem
   a. Compute the cost (without the Lagrange multiplier part) of the relaxed solution to get a "lower cost bound"
3. Enforce the constraints on the relaxed solution (can use a suboptimal algorithm to do this)
   a. Compute the cost of the feasible solution to get an "upper cost bound"
4. If the difference between the upper and lower cost bounds is sufficiently small,
   a. declare the solution good enough and stop
   b. Otherwise, adjust the Lagrange multipliers and go back to Step 2
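The relaxed subproblem in Step 2 is a 2-D assignment. For tiny examples it can simply be brute-forced; a sketch (real systems use an O(n³) method such as the Hungarian algorithm, and the cost matrix below is invented):

```python
from itertools import permutations

def solve_2d_assignment(cost):
    """Brute-force the 2-D assignment subproblem that each iteration
    of the relaxation reduces to. Fine for tiny examples only."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
# perm[i] = index assigned to item i (e.g., scan-2 measurement paired
# with scan-1 measurement i)
perm, total = solve_2d_assignment(cost)
```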
Papers by the Colorado Group
• A.B. Poore and N. Rijavec, "A Lagrangian Relaxation Algorithm for Multidimensional Assignment Problems Arising from Multitarget Tracking," SIAM J. of Optimization, vol. 3, no. 3, pp. 544-563, August 1993.
– Relax an N-dimensional problem to an (N−1)-dimensional problem
– Proceed recursively
• A.B. Poore and A.J. Robertson, "A New Lagrangian Relaxation Based Algorithm for a Class of Multidimensional Assignment Problems," Computational Optimization and Applications, vol. 8, pp. 129-150, 1997.
– Claims substantial improvement over the 1993 algorithm
– Relax N−2 constraints at once to make a 2-dimensional problem
– Iterate to improve the solution
Papers by the Connecticut Group
• T. Kirubarajan, H. Wang, Y. Bar-Shalom, and K.R. Pattipati, "Efficient Multisensor Fusion Using Multidimensional Data Association," IEEE Trans. Aerospace and Electronic Systems, vol. 38, no. 2, pp. 386-398, April 2001.
• R.L. Popp, K.R. Pattipati, and Y. Bar-Shalom, "m-Best S-D Assignment Algorithm with Application to Multitarget Tracking," IEEE Trans. AES, vol. 37, no. 1, pp. 22-39, January 2001.
• S. Deb, M. Yeddanapudi, K.R. Pattipati, and Y. Bar-Shalom, "A Generalized S-dimensional Assignment Algorithm for Multisensor-Multitarget State Estimation," IEEE Trans. AES, vol. 33, no. 2, pp. 523-538, April 1997.
• K.R. Pattipati, S. Deb, Y. Bar-Shalom, and R.B. Washburn, "A New Relaxation Algorithm and Passive Sensor Data Association," IEEE Trans. on Automatic Control, vol. 37, no. 2, pp. 198-213, February 1992.