
VOLUME 13, NUMBER 2, 2002 LINCOLN LABORATORY JOURNAL 275

Classification and Tracking of Moving Ground Vehicles
Duy H. Nguyen, John H. Kay, Bradley J. Orchard, and Robert H. Whiting

■ Airborne radars that track moving targets face the challenge of surveillance over large geographic areas where military vehicles are interspersed with civilian traffic. There is a major need to develop robust, efficient, and reliable identification and tracking techniques to identify selected targets, and to maintain tracks for selected critical targets, in dense target environments. Traditionally, tracking and identification have been considered separately; we identify a target first, and then we track it kinematically to sustain the identification. The difficulty with separate identification and tracking is that neither task is sufficient by itself to satisfy the demands of the other. We need to incorporate the distinctive target signature information into the tracker, so that identification and tracking, or signature comparison and tracking, perform together as a unit. The moving-target identification and feature-aided tracking approach described in this article combines kinematic association hypotheses with accumulated target classification information obtained from high range resolution (HRR), inverse synthetic aperture radar (ISAR), and synthetic aperture radar (SAR) signatures, to obtain improved classification and association. We describe the basics of moving-target classification and signature comparison using HRR profiles. In addition, we show improvements in HRR-based target identification using superresolution and Bayesian classification techniques. We describe how this feature information may be incorporated into kinematic tracking. Finally, we discuss methods to improve the robustness of the HRR techniques with the addition of available SAR and ISAR signatures.

Classification and tracking of moving targets are difficult problems for any surveillance radar. The classification problem and the tracking problem both complement and exacerbate each other. For stationary vehicles, we can classify targets on the basis of their high-resolution synthetic aperture radar (SAR) imagery, which forms a fine-resolution two-dimensional image of the vehicle that can be compared to a previously stored template. For moving targets, however, such two-dimensional images cannot be reliably generated because of unknown vehicle motion. The primary signature that can be reliably obtained for moving targets is the high range resolution (HRR) profile. For moving targets, classification information can be obtained by comparing an HRR profile with profiles collected from similar vehicles. However, an HRR profile contains less information about a target, compared to a two-dimensional SAR image, so reliable classification using HRR profiles relies on many looks from different aspect angles.

While the classification of moving targets relies on evidence accrued from many independent looks, such accrual can take place only if the same vehicle remains under track. Under difficult tracking situations, including low target speeds, unfavorable geometries, and dense traffic, kinematic trackers can make mistakes and associate target reports to the wrong tracks. Thus, while effective target classification could be accomplished with multiple-look evidence accrual, failures in track maintenance can prevent accurate target classification.

The target classification approach described in this article involves the integration of the target classification capability into a kinematic tracker, merged in such a way that the two capabilities complement and support each other. In particular, while evidence accrual for HRR-based classification relies on accurate report-to-track association, we have integrated the accumulating classification information, along with target signature information, into the association logic of the tracker. To improve the algorithm performance we have also explored the use of additional information that may be obtained by the radar on a sporadic basis. These sporadic signatures include SAR imagery, which may be obtained if a target stops while under track (a situation notorious for breaking moving-target indicator [MTI]–based tracks with a typical radar minimum detectable velocity of a couple of meters per second), and inverse SAR (ISAR) imagery, which may be obtained if a target turns while under track. Combined with the readily available HRR signature, these two-dimensional target signatures enable the target classifier and feature-aided tracker to perform robustly under a wide variety of circumstances.

The present work on identification and feature-aided tracking builds on significant previous developments at Lincoln Laboratory. The use of HRR signatures for target classification, which forms the cornerstone of moving-target classification, was originally demonstrated by M.L. Mirkin [1]. His results showed that HRR automatic target recognition (ATR) performance benefits from a number of independent looks at the targets, as well as fine resolution and extended dwell time (i.e., multiple looks for speckle reduction at each observation aspect angle). This work pointed out the reliance of HRR-based classification on effective tracking, since multiple looks on the same target will accumulate evidence only as long as all reports correctly associate with the track.

While Mirkin's work focused on HRR profiles for moving targets, L.M. Novak et al. performed classification using SAR imagery [2]. Their work on the Defense Advanced Research Projects Agency (DARPA)–sponsored Semi-Automated Image Intelligence Processing (SAIP) program demonstrated the successful application of a superresolution method called high-definition vector imaging (HDVI) [3], developed by G.R. Benitz, to SAR images. Their results indicated that a two-dimensional ATR system using HDVI-processed SAR images will outperform those systems whose SAR images were processed with standard non-data-adaptive processing methods (i.e., the weighted fast Fourier transform, or FFT).

Successful application of template-based ATR methods to ISAR imagery was later demonstrated by R.L. Levin [4], who used SAR images as templates. His results, in addition to demonstrating that ISAR images could be successfully used for moving-target ATR, also showed the benefits of the two-dimensional ISAR signature over the one-dimensional HRR signature, as well as the benefit of accumulating classification information over several looks, as a target executes a turn.

On the basis of the success of HDVI-processed SAR images in two-dimensional ATR systems, D.H. Nguyen et al. extended the application of HDVI to HRR profiles and demonstrated that the modified one-dimensional HRR ATR system (based on the system described in Reference 1) using these HDVI-processed HRR profiles provided significantly improved target recognition performance compared to other ATR algorithms using HRR profiles obtained from conventional image processing techniques, e.g., weighted FFT [5–7]. However, these HRR profiles were formed from SAR images of targets taken from the high-quality Moving and Stationary Target Acquisition and Recognition (MSTAR) data set having a high signal-to-noise ratio (SNR) of approximately 35 dB. From a radar resource perspective, for an HRR ATR system to be operationally useful, it must operate at a significantly lower SNR, in the range of 20 to 25 dB. Unfortunately, the demonstrated performance of the HRR ATR classifiers suffered significant losses as the SNR decreased from 35 to 20 dB. In particular, the improved performance obtained with HDVI over conventional processing seen at the high SNR was significantly reduced and essentially eliminated at the lower SNR.

Recently, Nguyen et al. demonstrated the application of Benitz's new superresolution technique called beamspace high-definition imaging (BHDI) to HRR profiles and showed that the enhanced one-dimensional ATR system using BHDI-processed HRR profiles exhibits significantly improved target-recognition performance at a low SNR compared to the conventional FFT method and the HDVI HRR method [8]. With the HRR profiles operating at a reasonable SNR, the radar resource requirements are sufficiently modest for frequent HRR interleave with MTI for effective feature-aided tracking.

This article is organized as follows: we first describe how the range profile, which is the only distinctive radar signature for moving vehicles that does not depend on the vehicle performing certain maneuvers, can be used to identify the vehicle class. In this context we introduce the concepts of improving the signature by using adaptive signal processing techniques (HDVI and BHDI). We illustrate how to use the signatures in conjunction with a template library to obtain classification via a Bayesian classifier. Once identification is established, we show how to use the vehicle classification so obtained, along with the radar HRR signature, to improve on track/report association, the feature-aided tracking component proper. Finally, we show how to extend this framework to include distinctive two-dimensional signatures, including SAR (stopping) and ISAR (turning) target signatures. These two-dimensional signatures serve two functions: they improve classification by comparison with a template library, and they can also be used for on-the-fly template training in the absence of a pre-stored template library.

Moving-Target Classification

The main goal of any ATR algorithm is to correctly identify an unknown target from its remotely sensed signature. Given a sensed signature from an unknown target, many ATR systems work by matching the given signature against a set of candidate target hypotheses that could give rise to this signature. The technique we have used for ATR is to compare a radar signature with a template set from known vehicles.

With a high-resolution radar, there are two signatures that are readily available from moving vehicles: the high-resolution range profile, or HRR, and an inverse SAR image, or ISAR. The HRR signal is a one-dimensional measure of the radar cross section along the range dimension of a vehicle. The ISAR image takes advantage of the vehicle turning to create a two-dimensional plan view of the vehicle. Other returns from moving parts such as lug nuts have been explored, but they are not considered in the present work for ATR purposes. If a vehicle stops, then it is also possible to obtain a SAR plan view, which can also be used for identification.

Since moving vehicles most often travel in relatively straight lines, it is usually impractical to obtain any vehicle signatures from a radar other than simple HRR profiles. The general approach to classification is to compare an HRR signature with a pre-stored set from known classes, and estimate the likelihood that each measurement corresponds to a particular vehicle class by using a matching algorithm. The better the match, the more likely the vehicle corresponds to a particular class. As the radar views the vehicle in subsequent scans, the classification evidence is accumulated for greater confidence.

The HRR ATR process proceeds over several stages. The basic profile is most directly formed by illuminating a target with a high-resolution waveform, for a series of coherent pulses (a coherent processing interval, or CPI). By Fourier-transforming the pulse stream and correcting for target range rate, the target signature typically can be contained in a single Doppler bin for relatively short pulse streams. Depending on the radar parameters, clutter cancellation may be necessary to better separate the target signature from the stationary ground clutter. Subsequent CPIs, collected adjacent in time, and perhaps with a center frequency offset, can be noncoherently combined to average out the speckle that arises from phase interference of different scattering centers on the vehicle. The methodology used for forming a range profile from a series of CPIs involves nominally averaging the power returns for the Doppler bins containing the target. We demonstrate an improved methodology for this profile formation, using superresolution methods.
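The profile-formation steps above can be sketched in a few lines. This is a simplified illustration, not the Laboratory's actual processing chain: the array layout, the absence of clutter cancellation, and the peak-energy Doppler-bin selection (a stand-in for a measured range-rate correction) are all assumptions made for the example.

```python
import numpy as np

def form_hrr_profile(cpis, target_doppler_bin=None):
    """Form an HRR range profile from several CPIs of pulse data.

    cpis : list of complex arrays, each (n_pulses, n_range_bins) --
           one coherent processing interval per entry (assumed layout).
    Returns the noncoherently averaged power profile (linear units).
    """
    power_profiles = []
    for cpi in cpis:
        # Fourier transform along the pulse (slow-time) axis to form
        # a Doppler bin for every range cell.
        rd_map = np.fft.fft(cpi, axis=0)
        dop_power = np.abs(rd_map) ** 2
        # Select the Doppler bin containing the target; here we simply
        # take the bin with the most energy.
        if target_doppler_bin is None:
            bin_idx = int(np.argmax(dop_power.sum(axis=1)))
        else:
            bin_idx = target_doppler_bin
        power_profiles.append(dop_power[bin_idx])
    # Noncoherent combination across CPIs averages out the speckle.
    return np.mean(power_profiles, axis=0)

# Tiny synthetic example: a point scatterer in range bin 5 with a
# constant Doppler shift, observed over three CPIs.
rng = np.random.default_rng(0)
n_pulses, n_bins = 16, 32
cpis = []
for _ in range(3):
    cpi = 0.01 * (rng.standard_normal((n_pulses, n_bins))
                  + 1j * rng.standard_normal((n_pulses, n_bins)))
    doppler = np.exp(2j * np.pi * 3 * np.arange(n_pulses) / n_pulses)
    cpi[:, 5] += doppler
    cpis.append(cpi)

profile = form_hrr_profile(cpis)
```

The superresolution methods discussed later replace the simple power averaging in the last step with a data-adaptive estimate.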

Once a range profile has been formed from a radar detection, the process of classification involves matching it against profiles taken from a template database of known vehicle types. A weighted mean square error (MSE) metric is used to measure the differences between the input test profile and the template profiles. A search over unknown parameters, such as precise target aspect angle, location, and radar cross section, is performed, and the smallest MSE is chosen to represent the fit between the test profile and the class being considered. For a multiclass classifier, we measure the MSE between the test profile and the template from each class under consideration. This set of measurements—the MSE match between a given profile and each classification target set—is referred to as a feature vector.

Given a feature vector for a particular profile, it is necessary to infer the actual classification. First the classifier is trained with data from known vehicles to obtain statistics of MSE distributions for both in-class vehicles and out-of-class vehicles. These training statistics are used to generate a likelihood that each measurement would come from a given class. Starting from a fixed a priori probability, successive likelihoods for each vehicle are combined with a Bayesian classifier to generate a posterior probability vector, which is presented to an operator.
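The evidence accumulation just described is a standard recursive Bayes update, sketched below. The class names, prior, and likelihood values are illustrative assumptions, not measured statistics; in practice the likelihoods come from the training distributions discussed later.

```python
import numpy as np

def bayes_update(prior, likelihoods):
    """One Bayesian update: multiply the current probability vector by
    the per-class likelihoods of the latest look and renormalize."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Three hypotheses: M1, M109, or "other", starting from a fixed prior.
classes = ["M1", "M109", "other"]
posterior = np.array([0.25, 0.25, 0.50])   # assumed a priori probabilities

# Per-class likelihoods of each look's MSE feature vector
# (illustrative numbers only).
looks = [np.array([0.8, 0.3, 0.1]),
         np.array([0.7, 0.4, 0.2]),
         np.array([0.9, 0.2, 0.1])]
for lk in looks:
    posterior = bayes_update(posterior, lk)
```

After three looks that each favor the first class, the posterior vector presented to the operator is strongly concentrated on M1.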

Radar Resources and System Considerations

Moving-target ATR and tracking is a resource-constrained problem, and we cannot enter into a discussion of the topic without first delineating the primary issues involved. A radar is an active sensor, with a finite power and aperture. For a surveillance mission particularly, we must be extremely parsimonious with the radar resources.

The key radar resources to be minimized are the SNR, the number of pulses dedicated to a particular task, and the resolution of these pulses. Dedicating more pulses leads to a higher net SNR, but at the expense of less rapid revisits. Operating at very high resolution may require higher SNR, and possibly require additional pulses to cover a wider bandwidth. Moving-target classification generally functions in conjunction with an MTI mode, which operates by using a coherent pulse stream yielding a net SNR on the order of 15 dB. If an ATR system were to require 35-dB, or even 25-dB, SNR to function effectively, and required that level of SNR for the entire coverage area, the ATR system would need so much of the radar energy as to drastically reduce the MTI coverage area or drastically reduce the revisit frequency.

Superresolution Methods

To improve the features of the HRR profiles, given limited radar resources, we consider superresolution approaches. Unlike conventional Fourier imaging, which uses a predetermined set of weighting coefficients for each pixel in the output image, superresolution techniques are data-adaptive image-formation methods that select an optimal set of weighting coefficients for each pixel.

Superresolution methods have been used extensively for SAR ATR applications. G.J. Owirka et al. [9] used a two-dimensional template-based ATR metric to compare SAR images formed from several spectral estimation techniques and found that, of these methods, Benitz's HDVI [3] provided the best performance. In addition to HDVI, these methods include D.H. Johnson's eigenvector-based method [10], H.C. Stankwitz et al.'s spatially variant apodization (SVA) method [11], Stankwitz and M.R. Kosek's super-resolved SVA (SSVA) method [12], and J. Capon's maximum likelihood method (MLM) [13].
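Of the data-adaptive methods listed, Capon's MLM has the most compact form: the estimated power at each steering location is 1/(aᴴR⁻¹a), where R is the sample covariance and a the steering vector. The sketch below illustrates the idea on a one-dimensional spectral estimate; the signal model, snapshot counts, and diagonal loading are assumptions made for the example, not parameters from the article.

```python
import numpy as np

def capon_spectrum(snapshots, n_freqs=64, loading=1e-3):
    """Capon maximum-likelihood spectral estimate from an array of
    snapshots (n_snapshots, n_samples).  Diagonal loading stabilizes
    the inverse when the sample covariance is poorly conditioned."""
    n = snapshots.shape[1]
    # Sample covariance R = E[x x^H], averaged over snapshots.
    R = (snapshots.T @ snapshots.conj()) / snapshots.shape[0]
    R += loading * np.trace(R).real / n * np.eye(n)
    R_inv = np.linalg.inv(R)
    freqs = np.arange(n_freqs) / n_freqs
    power = np.empty(n_freqs)
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(n))       # steering vector
        power[k] = 1.0 / np.real(a.conj() @ R_inv @ a)  # P = 1/(a^H R^-1 a)
    return freqs, power

# Snapshots containing one sinusoid at normalized frequency 0.25 in noise.
rng = np.random.default_rng(1)
n_snap, n = 40, 16
tone = np.exp(2j * np.pi * 0.25 * np.arange(n))
x = tone + 0.1 * (rng.standard_normal((n_snap, n))
                  + 1j * rng.standard_normal((n_snap, n)))
freqs, power = capon_spectrum(x)
```

The data-adaptive weights suppress energy from all directions except the steering location, which is what sharpens the estimate relative to a fixed-weight FFT.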

Recently, Benitz proposed a modification to HDVI that reduces the computational cost, from 8000 operations per input pixel for a SAR image, by a factor of six. This new method, called beamspace high-definition imaging (BHDI) [14], provides speckle noise suppression that is comparable to its HDVI predecessor but improves on HDVI's ability to preserve target edges. Figure 1 compares a BHDI-processed SAR image of a Scud transporter erector launcher (TEL) against the image of the same Scud TEL formed by using a Taylor-weighted FFT (baseline FFT). The inherent ability of BHDI to control the main lobe and the sidelobes results in significant reduction in speckle noise compared to the weighted FFT.

BHDI, as applied to HRR profile estimation, involves an estimation of the range profile, given a set of noncoherent complex range cuts. For additional detail on this estimation process see the appendix entitled "Beamspace High-Definition Imaging Superresolution Method." Unlike the original SAR application, the covariance estimation is aided by independent profiles from adjacent CPIs, so that the rank deficiency in the covariance matrix, which necessitates extensive constraints in the SAR estimation problem, is greatly alleviated.

Mean-Square-Error Matching Metric

The principles behind template-based ATR involve measuring the similarity between a test profile and what is expected from a given hypothesized target. The degree of similarity is used to determine the likelihood that the test profile originated from the hypothesized target class. For the ATR effort described here, we have employed a weighted MSE metric between the profile under test and the template, or expected profile, for a given hypothesized target.

Constructing a weighted MSE involves assuming a mathematical model of the profile under test. Since each range sample on an HRR range profile comprises a complex sum over all scatterers across the target with the same range, a single-CPI complex range profile is generally modeled as a zero-mean Gaussian random variate, with variance equal to the mean cross section as found in the template. Noncoherently combining several profiles results in a χ2 variate. With such a model, an appropriate matching metric for a profile to a template is an MSE metric between the profile and template, both expressed in decibels (dB). Several extra variables must also be included in the match, namely, matching of the receiver noise floor, a search over target range and radar cross section (the latter assuming that radar-cross-section calibration is not well known), and finally a search over template aspect angle to cover uncertainties in aspect-angle estimation. Figure 2 shows the effects of matching radar cross section. For an N-class classifier, we construct the MSE match between each test profile and all target classes being considered, which results in an MSE vector of length N.
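A minimal sketch of the dB-domain MSE match follows, with the search over range shift and radar-cross-section offset described above. The uniform weighting, the brute-force shift search, and the omission of the noise-floor and aspect-angle searches are simplifying assumptions for illustration.

```python
import numpy as np

def mse_match_db(test_db, template_db, max_shift=3):
    """MSE between a test profile and a template, both in dB.
    Searches over integer range shifts and a scalar RCS (dB) offset;
    returns the smallest MSE found.  Uniform weights for simplicity."""
    best = np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(template_db, shift)
        # The least-squares RCS offset in dB is the mean difference.
        offset = np.mean(test_db - shifted)
        mse = np.mean((test_db - shifted - offset) ** 2)
        best = min(best, mse)
    return best

def feature_vector(test_db, templates_db):
    """MSE of the test profile against each class template: the
    length-N feature vector for an N-class classifier."""
    return np.array([mse_match_db(test_db, t) for t in templates_db])

# Illustrative profiles: the test is a shifted, rescaled copy of class 0.
rng = np.random.default_rng(2)
t0 = rng.uniform(-40, -20, 32)
t1 = rng.uniform(-40, -20, 32)
test = np.roll(t0, 2) + 5.0          # 2-bin range shift, +5 dB RCS offset
fv = feature_vector(test, [t0, t1])
```

Because the shift and offset searches absorb the range and calibration uncertainties, the in-class entry of the feature vector drops essentially to zero while the out-of-class entry stays large.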

Bayesian Classifier

With the MSE as the basic metric for similarity between each profile under test and each hypothesized target class, the task of classification involves an inference step from a measured MSE vector; i.e., given an MSE vector, how likely is it that the target under test corresponds to one of the target classes in the template set? We have employed a form of Bayesian inference, which constructs a posterior probability from a combination of prior expectations and conditional measurements, or likelihoods.

FIGURE 1. Comparison of a beamspace high-definition imaging (BHDI)–processed synthetic aperture radar (SAR) image of a Scud transporter erector launcher (TEL) against a baseline image of the same Scud TEL formed by using a conventional two-dimensional Taylor-weighted fast Fourier transform (FFT). The Taylor-weighted FFT image on the left has 0.5 m × 0.5 m resolution. The image on the right has been BHDI-processed to an approximate 0.25 m × 0.25 m resolution. The BHDI-processed image is much sharper than the FFT-processed image as a result of speckle noise reduction.

Successful application of Bayesian inference involves knowing something about the statistics of the MSE measurements. What is needed is the probability that a given MSE measurement is produced from a known target class. To construct such a probability, we employ training statistics, passing profiles from known vehicles through the MSE calculation and compiling statistics. Since the MSE vector is positive definite, and not easily modeled, we have computed statistics on the log of the MSE, which enables the MSE vector to be modeled statistically as a multivariate Gaussian distribution. Figure 3 illustrates the process of forming log-MSE statistics and constructing a multivariate Gaussian model. In this figure we compile log-MSE statistics for M1 and M109 test target vehicles and a set of other vehicles, using templates for the M1 and M109.

FIGURE 2. The graph on the left shows the test and template profiles after they have been correctly aligned in range and coarsely aligned in radar cross section (RCS), with log(MSE) = 0.721. The graph on the right shows the profiles after completion of the RCS alignment procedure, with a correspondingly lower value of the mean square error (MSE) metric, log(MSE) = 0.636.

FIGURE 3. Training statistics for a two-class classifier, in which the template set consists of M1 and M109 test target vehicle templates. The other class in the figure on the left consists of distinct vehicles similar in size to the test vehicles. The probability distribution function was modeled as a multivariate Gaussian distribution of the log of the MSE.

The training statistics themselves represent the results of computing log-MSE values for a number of vehicles, over a number of conditions chosen to represent a test ensemble. Each point in the figure is the log-MSE for a given vehicle as measured against each of the filters for the test set. Figure 3 represents a two-class classifier, so each measurement is a point in the two-dimensional log-MSE space. With the ensemble suitably chosen, we arrive at a cloud of points representing MSE measurements under similar conditions.

These results illustrate a number of key points. First, even a vehicle matched to the template set yields significant scatter in log-MSE. This scatter is due in part to the intentional use of distinct vehicles for template construction and training, and in part to the intentional mismatch in collection geometry. These intentional mismatches are intended to represent the variability that we might encounter in practice when trying to match a profile collected in the field with previously stored templates. Another key point is that, in order to apply Bayes rule, we must estimate a probability distribution function for the scatter of measurements. Since the MSE is positive definite, it is not easily modeled by a simple estimation procedure. Modeling instead the log of the MSE alleviates this problem, and for purposes of classification a multivariate Gaussian appears to be a reasonable representation of the MSE vector for known vehicles.
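Fitting the training statistics amounts to estimating a mean and covariance of the log-MSE vectors per class and evaluating the Gaussian density for a new measurement. The sketch below illustrates this; the synthetic training clouds and their locations in (log M1-MSE, log M109-MSE) space are assumptions for the example, not the article's measured data.

```python
import numpy as np

def fit_gaussian(samples):
    """Mean and covariance of a cloud of log-MSE vectors (n, d)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mu, cov

def gaussian_loglik(x, mu, cov):
    """Log density of a multivariate Gaussian at x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

# Synthetic two-class training clouds: in-class vehicles score a low
# MSE against their own template and a higher one against the other.
rng = np.random.default_rng(3)
m1_cloud = rng.normal([0.4, 1.5], 0.15, size=(200, 2))
m109_cloud = rng.normal([1.5, 0.4], 0.15, size=(200, 2))

m1_stats = fit_gaussian(m1_cloud)
m109_stats = fit_gaussian(m109_cloud)

# A new profile with a low M1-MSE is far more likely under the M1 model.
x = np.array([0.45, 1.4])
ll_m1 = gaussian_loglik(x, *m1_stats)
ll_m109 = gaussian_loglik(x, *m109_stats)
```

Exponentiating these log likelihoods and combining them with priors via Bayes rule yields the posterior probabilities discussed next.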

Decision Regions

Figure 3 suggests that the training statistics for a given target class in an N-class classifier can be readily represented by an N-dimensional Gaussian random variate. The training statistics lead to a functional form for each target class, as represented in Figure 3 by the contours for the estimated Gaussian forms. Now, when an unknown profile is encountered, we first calculate a log-MSE vector for the profile against each of the test templates. Then, by using the training statistics, we evaluate the likelihood that the profile originated from each of the test target types, as well as the likelihood that the test profile corresponds to a vehicle outside the classification set. Application of the well-known Bayes rule [7] then converts these likelihoods, using prior probabilities, into posterior probabilities that the test profile originated from each of the classification types or from an out-of-class vehicle.

With Bayes rule, the prior probabilities can strongly determine the estimated posterior probabilities. First, let us continue with the M1-M109 two-class classifier as an example. There are two filters—M1 and M109—but there are three questions that we need to ask: is the target under test an M1, an M109, or neither? Setting the prior probability of neither (designated the other class) to zero forces the classifier to choose either an M1 or an M109 for any target profile under test. With equal prior probabilities for each of these two classes, Figure 4 shows the posterior probability that the target is an M109, as a function of the log-MSE measured from a given profile. The separation between the target decision regions is clearly evident.

Generally, we do not want to force a decision between one or another among a small number of target classes. More likely, most vehicles under observation are not one of the types being classified. By varying the prior probability of the other class, we can reject such vehicles. Figure 5 shows the effect of varying the other prior probability, or prior, on the decision regions. For even small other priors, MSE vectors that lie outside the main distribution of the M1 and M109 training distributions are rejected, allowing the classifier to correctly reject targets outside of the vehicle classes for which it has templates.
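The effect of the other prior on the decision can be shown directly. The likelihood values and priors below are illustrative assumptions; the broad "other" density gives a marginal feature vector a moderate likelihood under the neither hypothesis.

```python
import numpy as np

def classify(likelihoods, priors, classes):
    """Apply Bayes rule and return the maximum-a-posteriori class."""
    post = np.asarray(priors) * np.asarray(likelihoods)
    post = post / post.sum()
    return classes[int(np.argmax(post))], post

classes = ["M1", "M109", "other"]
# A marginal feature vector: weak match to both templates, so the
# broad "other" density assigns it a comparatively high likelihood.
lik = np.array([0.05, 0.04, 0.30])

# Other prior forced to zero: the classifier must pick M1 or M109.
label_forced, _ = classify(lik, [0.5, 0.5, 0.0], classes)

# Even a modest other prior lets the same vector be rejected.
label_open, _ = classify(lik, [0.45, 0.45, 0.10], classes)
```

With the other prior at zero the marginal vector is forced into the M1 region; with a 0.1 other prior the same vector is rejected as an out-of-class vehicle, mirroring the behavior of the decision regions in Figure 5.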


FIGURE 4. Posterior probability for the M1, assuming that the other prior is zero. Setting the other prior to zero reduces the three-class decision regions into the decision regions for the M1 versus the M109. If a feature vector lies in the red region, we classify the test profile resulting in that feature vector as an M1. Similarly, if a feature vector lies in the blue region, we classify the test profile resulting in that feature vector as an M109.


HRR ATR Measure of Performance

In this section, we present target-identification performance results for the ATR classification methods described in the previous section. To graphically demonstrate the performance of the ATR classifier, we borrow the concept of a receiver operating characteristics (ROC) curve from communication theory, as shown in Figure 6. The y-axis of the ROC curve displays the probability of correct classification (Pcc) of a target of interest, while the x-axis shows the probability of false classification (Pfc) of a target from the other class as a target of interest. The Pcc is the probability that the targets of interest are correctly classified among themselves. The Pfc is the probability of falsely classifying one of the targets in the other class as one of the targets of interest. Note that an upward shift of the ROC curve corresponds to an improvement in Pcc, while a shift of the ROC curve toward the left corresponds to a reduction in Pfc. In other words, any shift toward the upper left corner represents an improvement in ATR performance.
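A ROC point of this kind can be computed directly from labeled classifier scores. The scores below are synthetic stand-ins; the sweep variable plays the role of the decision threshold (or, equivalently, the other-class prior):

```python
import numpy as np

rng = np.random.default_rng(0)
# "score" = classifier confidence that a profile is a target of interest.
target_scores   = rng.normal(1.0, 0.5, 1000)   # targets of interest
confuser_scores = rng.normal(0.0, 0.5, 1000)   # vehicles from the other class

def roc_point(threshold):
    """Pcc: fraction of targets of interest accepted (here assumed
    correctly labeled among themselves); Pfc: fraction of other-class
    vehicles falsely accepted as a target of interest."""
    pcc = np.mean(target_scores >= threshold)
    pfc = np.mean(confuser_scores >= threshold)
    return pcc, pfc

# Sweeping the threshold traces out the ROC curve; lower thresholds
# raise both Pcc and Pfc.
for t in (1.5, 0.5, -0.5):
    pcc, pfc = roc_point(t)
    print(f"threshold={t:+.1f}  Pcc={pcc:.2f}  Pfc={pfc:.2f}")
```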

Description of the Data

The raw HRR profiles used in these studies were formed from SAR imagery provided to Lincoln Laboratory by Wright Laboratories, Wright-Patterson Air Force Base, Dayton, Ohio. These data were collected in 1995 and 1996 by the Sandia National Laboratory X-band (9.6 GHz) HH-polarization SAR sensor in support of the DARPA-sponsored MSTAR program.

The data set contains SAR images of forty-four military vehicles that were imaged in spotlight mode at 15° and 17° depression angles over 360° of aspect angles. Figure 7 shows ground-truth photographs of the twenty-three distinct MSTAR targets from collections 1 and 2 combined. The target set includes six distinct vehicles that are of interest to the classifier and seventeen vehicles that are not. We refer to the set of targets not of interest to us as the other class. Of the

[Figure 5 plots log(M1 MSE) against log(M109 MSE), showing decision boundaries among the M1, M109, and other classes for other priors of 0.0, 0.1, 0.98, and 0.99.]

[Figure 6 plots Pcc against Pfc for a baseline ROC curve, with arrows marking the direction of improvement in Pcc (upward) and the direction of reduction of Pfc (leftward).]

FIGURE 5. The contour lines in this figure represent the decision boundaries between the three classes as a priori knowledge about the other class varies from 0 to 1. The center line, which corresponds to an other prior of 0, is the same decision boundary shown in the previous figure. As the other prior increases from 0 to 1, the decision region for the other class increases at the expense of the M1 and M109 decision regions. Note that the M109 decision region compresses much more quickly than that of the M1, indicating a higher risk for an M109 high range-resolution profile than for an M1 to be classified as an other type.

FIGURE 6. Receiver operating characteristics (ROC) curves such as these are used to graphically display automatic target recognition (ATR) performance results. The probability of false classification (Pfc) represents the probability of falsely classifying targets not of interest as a target of interest. The probability of correct classification (Pcc) denotes the probability of correctly classifying among the targets of interest. Since the objective of the ATR is to improve the Pcc while at the same time reducing the Pfc, ATR performance results are improved by moving the ROC toward the upper left corner.


twenty-three targets in both classes, we have three variants each of the BMP2 armored personnel carrier and the M2 Bradley fighting vehicle, eleven variants of the T72 Russian tank, five variants of the M109 self-propelled howitzer, and four variants of the BTR70 armored personnel carrier. The variants of the M2, BMP2, M109, and BTR70 vehicles have minor differences between their configurations. The eleven T72 vehicles, however, vary significantly in configuration: (1) four T72s are fully intact without any fuel barrel mounted; (2) two T72s are fully intact with barrels mounted on the rear; (3) two T72s are missing skirts along the side but have fuel barrels mounted; (4) two T72s are missing skirts and have no fuel barrel mounted; and (5) one fully intact T72 with fuel barrels has reactive armor mounted on the exterior.

We used 17° depression-angle images of the six targets of interest taken from collection 1 to construct the six HRR classifier template profiles: BMP2#1, M109#1, T72#1, M1, M110, and M113. A total of 128 images of each of the six targets, covering 360° of aspect, were used to construct the HRR templates. For each SAR chip imaged over a 3° aspect look, we formed three 1° HRR templates by dividing the SAR image into three contiguous non-overlapping subsets. Each subset contains complex HRR profiles collected over a continuous 1° aspect look.
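The subset-splitting step can be sketched as follows. The array shapes and the non-coherent averaging rule are assumptions for illustration; the article does not detail its exact profile-averaging chain:

```python
import numpy as np

def form_templates(chip, n_subsets=3):
    """chip: complex array (n_profiles, n_range_bins) of HRR profiles
    ordered by aspect over a 3-degree look. Returns n_subsets magnitude
    templates, one per contiguous 1-degree subset."""
    subsets = np.array_split(chip, n_subsets, axis=0)
    # Non-coherent average of profile magnitudes within each subset
    # (an assumed averaging rule, for illustration only).
    return [np.mean(np.abs(s), axis=0) for s in subsets]

# 30 complex profiles covering a 3-degree look, 64 range bins each:
rng = np.random.default_rng(1)
chip = rng.normal(size=(30, 64)) + 1j * rng.normal(size=(30, 64))
templates = form_templates(chip)
print(len(templates), templates[0].shape)   # 3 (64,)
```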

The HRR test profiles were generated from MSTAR target images taken at a depression angle of 15°. We tested the HRR ATR classifier on all forty-four targets, covering 360° of aspect angles. Targets taken from the other class were used as "confuser" vehicles and thus were not part of the template database. Ideally, we expect the ATR classifier to classify these confuser vehicles as an other type.

Summary of HRR ATR Results

We now summarize the effects of the SNR, the number of CPIs, and the range resolution on ATR performance results for HRR profiles conventionally processed by using a Taylor-weighted FFT, HDVI, and BHDI. Henceforth, we refer to the Taylor-weighted FFT as the baseline FFT method and HDVI as the baseline HDVI method.

[Figure 7 shows ground-truth photographs. Six-class classifier vehicles: M1, BMP2, M109, M110, M113, and T72 (four T72 variants shown). Other vehicles: M2, BTR60, BTR70, M548, M35, HMMWV, 2S1 gun, BRDM2, D7 bulldozer, M520, M577, M60, M88, M978, T62, ZSU23-4, and ZIL131.]

FIGURE 7. The Moving and Stationary Target Acquisition and Recognition (MSTAR) collection 1 and 2 data set consists of twenty-three distinct vehicles, six from the classifier target class and seventeen from the other class. Our six-class classifier is constructed from the M1, BMP2, M109, M110, T72, and M113 target vehicles shown in the upper left of the figure. This target set consists of three distinct models of the BMP2, four of the M109, and eleven of the T72. Of the eleven T72 vehicles, some have fuel barrels mounted on the back with skirts on the side, some have fuel barrels and no skirt, some have skirts and no fuel barrel, some have no skirt and no fuel barrel, and one has reactive armor mounted around the exterior. Four of these T72 variants are shown above. The other class of vehicles, representing targets we do not care about, consists of the seventeen targets shown on the right.

First we show the gain in target-identification performance with the new superresolution BHDI method, as compared against the conventional FFT-based method. Figure 8 compares the ATR results obtained with the BHDI-processed 20-dB SNR data at 1-m and 0.5-m range resolution against results obtained by using HDVI-processed and baseline FFT-processed data. At 20-dB SNR the baseline HDVI and FFT methods provide comparable performance results. Figure 8(a), however, shows that the BHDI-processed data provide, on average, a 10% improvement in Pcc at all values of Pfc compared to data processed with the baseline HDVI and with the baseline FFT. Figure 8(b) shows a similar improvement in performance with the BHDI method at 0.5-m range resolution. In the 0.5-m resolution case, we see that the BHDI-processed data provide approximately 5% improvement in Pcc over the baseline HDVI-processed data and 8% improvement over the baseline FFT-processed data.

Moreover, at 20-dB SNR, the BHDI-processed data at 1-m range resolution exhibit performance comparable to the baseline HDVI-processed and baseline FFT-processed 0.5-m range-resolution data. In light of these results, we can expect a performance improvement equivalent to doubling the resolution by applying BHDI, compared to the two baseline methods.

In addition to providing improved performance at a lower SNR, BHDI also reduces the degree of system resources needed, as measured by, for example, the number of CPIs needed to form an HRR profile. Figure 9 compares BHDI-processed profiles with baseline FFT-processed profiles at 20-dB SNR, using five and three CPIs. The results indicate that when the number of CPIs is reduced from five to three, the probability of correct classification with the baseline FFT-processed HRR profiles degrades by 8% on average. With BHDI-processed HRR profiles formed from three CPIs, we obtained a performance result equivalent to the baseline FFT-processed profiles formed from five CPIs. Effectively, when BHDI is used, we can expect a performance improvement equivalent to nearly twice the number of CPIs used to form the HRR profiles, as compared to the baseline FFT method.

BHDI also saves radar system resources compared with HDVI. Initially, BHDI was developed to save

[Figure 8 consists of two ROC plots of Pcc versus Pfc, (a) at 1.0-m resolution and (b) at 0.5-m resolution, each comparing BHDI, baseline HDVI, and baseline FFT.]

FIGURE 8. Classifier performance results for BHDI-processed (a) 1-m and (b) 0.5-m HRR profiles at 20-dB SNR, compared against data processed with baseline HDVI and baseline FFT. The profiles were obtained by processing five coherent processing intervals (CPI). The BHDI-processed data provide approximately 5% improvement in Pcc over the baseline HDVI-processed data and 8% improvement in Pcc over the baseline FFT-processed data at 0.5-m range resolution.


on computational requirements, i.e., to reduce the number of operations needed to form an image. In addition, however, we have found that BHDI reduces radar resources while providing improved target-identification performance. For example, Figure 10 shows the performance results comparing BHDI-processed 20-dB and 25-dB SNR data at 1-m and 0.5-m range resolution against data processed with the baseline HDVI method. The results show that at 25-dB SNR, BHDI provides approximately 5% improvement in Pcc at all values of Pfc. Moreover, at both the 1-m and 0.5-m range resolution, the application of BHDI at 20-dB SNR provides a performance equivalent to the data at 25-dB SNR processed with the baseline HDVI method. Consequently, we can obtain a performance improvement equivalent to 5 dB of SNR by applying BHDI, compared to the baseline methods.

[Figures 9 and 10 each consist of two ROC plots of Pcc versus Pfc at (a) 1.0-m and (b) 0.5-m resolution. Figure 9 compares BHDI and baseline FFT at five and three CPIs; Figure 10 compares BHDI and baseline HDVI at 25-dB and 20-dB SNR.]

FIGURE 9. Classifier performance results for BHDI-processed (a) 1-m and (b) 0.5-m HRR profiles formed from three and five CPIs, compared against data processed with the baseline FFT method. The test profiles were obtained at 20-dB SNR.

FIGURE 10. Classifier performance results for BHDI-processed (a) 1-m and (b) 0.5-m HRR profiles at 25-dB SNR and 20-dB SNR, compared against data processed with the baseline HDVI method. The test profiles were obtained from five CPIs.


Figures 8 through 10 contain results for six classes of vehicle types whose lengths and widths vary from 6.5 to 7.5 m and 3 to 4.5 m, respectively. To identify these targets among each other by using HRR profiles with such comparable range extents, the one-dimensional classifier must rely solely on the scattering features from the profiles. In the absence of confuser vehicles (other targets), we can expect the Pcc to reach 0.8 and 0.9 for the 1-m and 0.5-m range-resolution profiles at 20-dB SNR, respectively. The results demonstrated in Figures 8 through 10 include the presence of confuser vehicles with lengths and widths comparable to the six classes under test. Unfortunately, identifying targets with such comparable range extent with high confidence is difficult. We therefore expect the Pcc to deteriorate significantly as the Pfc of the other class improves.

We can experimentally verify this assertion by including, in the template database, HRR profiles of a Scud TEL and an M978 HEMTT truck, whose lengths and widths exceed 10 m and 2.5 m, respectively. These targets have twice the range extents of the six vehicle classes used to generate the results shown in Figures 8 through 10. After testing the HRR classifier with the addition of the Scud TEL and M978, we can see in Figure 11 that the Pcc of the resulting eight-class classifier improves on the six-class classifier discussed earlier by approximately 10% in the regime of low Pfc (<0.2). The improvement comes from averaging individual Pcc results from the original six classes with the high Pcc results from the Scud TEL and M978. The black and green curves show individual results for the Scud TEL and M978, respectively, indicating discrimination of these long targets well exceeding 95% Pcc. In the Pfc < 0.01 regime, the classifier performance degrades significantly as a result of setting the other prior probability close to one (hence the targets' prior probabilities close to zero).

In many battlefield situations, only very high-value targets such as a Scud TEL may be of interest to the commander. Given this presumption, we may use a one-class classifier to discriminate a Scud TEL from the rest of the world. Figure 11 contains performance results of a Scud TEL discriminator using HRR profiles at 1-m range resolution and 20-dB SNR. At Pfc values greater than 0.3, the classifier can identify a Scud TEL from the other targets almost perfectly. However, the performance degrades considerably as the Pfc improves to below 0.3. We suspect two contributing factors to the observed degradation. First, the M978 HEMTT acts as a confuser vehicle, very similar in range extent to the Scud TEL. With nothing but training statistics for different vehicles, the classifier does not have a good criterion for rejecting the M978. Second, near broadside the MSE values from all vehicles tend to collapse into a common value, so that classifier performance is much poorer for near-broadside aspects than for other aspects.

Other Class Information

At this point, we need to make a slight diversion back into Bayes decision theory. We have come across the situation in which a confuser vehicle prevents the classifier from simultaneously classifying a Scud TEL as a TEL and rejecting a similar-length M978 as a non-TEL. The solution is twofold. The first part of the solution is, not surprisingly, that it is necessary to

[Figure 11 plots Pcc versus Pfc for the Scud TEL, the M978, the eight-class classifier, and the six-class classifier without the Scud TEL and M978.]

FIGURE 11. Classifier performance results for BHDI-processed 1-m HRR profiles at 20-dB SNR. The red curve represents previous results obtained for the six-class classifier (with targets M1, T72, M109, M110, BMP2, and M113), while the blue curve shows the eight-class classifier performance resulting from inclusion of the Scud TEL and the M978 into the six-class classifier. The black and green curves show individual results for the Scud TEL and the M978, respectively, indicating that the classifier has no trouble discriminating these two new targets from vehicles in the other class.


add a template for the confuser, if it exists, to the classifier set. If a precise template is not available, it is still beneficial to generate a surrogate confuser to use in the template set. The second part of the solution involves modifying how the Bayes decision rule is handled.

Having a confuser template adds information that we would expect to improve the ability to distinguish between a target of interest and a target that one wishes to reject from classification. This additional information plays a role in the Bayes decision process by adding dimensionality to the problem. Rather than simply rejecting a target as an other class, which is rejection based only on targets we wish to classify, we can reject a target on the basis of the additional information that the target under test in fact yields a good MSE match to a confuser template. When handled correctly, such information always improves the ability of the classifier to perform.

In evaluating the efficacy of adding additional information to the classification, we need to be careful to keep the criteria for success and failure constant as we add information. This careful attention applies in two places in the decision process: when formulating the decision and when scoring the result. The decision process is first changed, obviously, by the addition of the other-class template, as we would expect. The decision process is next changed by splitting the a priori probability for the other class between the targets without templates and the confuser for which a template is provided. The second place in which the additional information affects the decision process, scoring, is more subtle but, in hindsight, more obvious: the criteria for success should not change. Thus, for example, if we consider a single-class Scud TEL detector and add an M978 filter to aid in the rejection of this similar-length vehicle, then the criterion for success is still whether a TEL is properly classified as a TEL and a non-TEL is properly identified as a non-TEL. If an M978 is improperly classified as an other type of vehicle, or an other type of vehicle is improperly identified as an M978, then this classification is not an error.
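This fixed success criterion can be captured in a small scoring function. The class names and the truth/decision encoding below are illustrative:

```python
# Scoring rule for a single-class Scud TEL detector augmented with an
# M978 confuser template: only TEL-versus-non-TEL confusions count as
# errors, because the M978 and the anonymous "other" class are both
# non-TELs. Class labels here are illustrative placeholders.

TARGETS_OF_INTEREST = {"ScudTEL"}

def is_error(truth, decision):
    """Return True only when a TEL is missed or a non-TEL is falsely
    called a TEL; confuser-vs-other mixups are not errors."""
    truth_is_tel = truth in TARGETS_OF_INTEREST
    decision_is_tel = decision in TARGETS_OF_INTEREST
    return truth_is_tel != decision_is_tel

print(is_error("M978", "other"))      # False: confuser called other is fine
print(is_error("other", "M978"))      # False: other called confuser is fine
print(is_error("ScudTEL", "M978"))    # True: a missed TEL is an error
print(is_error("other", "ScudTEL"))   # True: a false classification
```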

When the classifier is provided additional information about the M978 by using the approach described above, we see that the classifier Pcc performance improves dramatically, as shown in Figure 12. For the simple single-class classifier in the example shown, the classifier achieves a probability of detection of 80% while correctly rejecting 80% of non-TELs. Note that for a single-class classifier, the probability of detection is synonymous with Pcc. With inclusion of the M978 filter, the 80% probability of detection is achieved while rejecting 90% of non-TELs. As more and more confuser filters are included, the classifier approaches perfection. Additionally, since the widths of all the vehicles are comparable, especially on the scale of the 1-m resolution used, significant and unavoidable confusion occurs at broadside looks. When classification is applied in the context of a tracker, the classifier does not update on near-broadside looks, where all targets are indistinguishable. Simulating this situation for the example here, we exclude targets near broadside and find that the simple TEL detector can achieve a 90% probability of detection with 93% rejection of other vehicles; adding the M978 improves this result to 90% probability of detection and 97% rejection of non-TELs.

FIGURE 12. The dotted blue curve shows the results of the one-class classifier trying to discriminate a Scud TEL from all other targets. When provided with additional knowledge of the M978 vehicle, the classifier performance improves by 20% in the Pfc < 0.2 regime. If profiles within ±30° of broadside are removed, the performance improves by approximately 35% in the Pfc < 0.2 regime. As knowledge of additional vehicle targets is added to the classifier, the Scud TEL discriminator performance approaches the desired performance result.

[Figure 12 plots probability of correct classification against probability of false classification of other, with six TEL-detector curves: no additional information; with the M978; no additional information, no broadside; M978 results, no broadside; with M1, T72, M109, M110, M113, and BTR70 templates; and with M1, T72, M109, M110, M113, BTR70, and M978 templates.]


Feature-Aided Tracking

For moving targets, target identification cannot be considered separately from tracking: because the targets are in motion, target identification rapidly becomes perishable information if the target is not kept in track. As was discussed above, there are three primary target signatures that are useful for identification: (1) the HRR range profile, which is visible when the target is in motion; (2) the ISAR signature, which is visible when the target undergoes a turn; and (3) the SAR signature, which is visible when the target stops. For identification using HRR, several looks at the target from different aspect angles are necessary to arrive at a high-confidence identification, requiring the tracker to maintain target identity over the accumulation period for the classification evidence to be properly accumulated. Since ISAR and SAR both yield a two-dimensional signature, we would expect a greater classification confidence than with HRR for each observation. But again the tracker must properly link reports and maintain track for such identification to be of anything but a transient nature.

We have pursued the viewpoint that identification information can be accumulated from a number of signatures, be they HRR, ISAR, or SAR. Linking this information together requires a tracker that not only can fuse different identification types such as HRR, ISAR, and SAR, but ideally one that can also take advantage of this non-kinematic information to improve tracking.

The objective of a tracker is to maintain track on moving targets and to estimate the target state. A kinematic tracker produces estimates of the track state (e.g., position, velocity, and heading) based on radar measurements such as range, range rate, angle, and angle rate, by correctly associating new detections with existing target tracks under appropriate kinematic transition models. A feature-aided tracker builds upon the traditional kinematic tracker and uses additional target features, such as the HRR profile and target classification information, to help improve this association.
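The article does not detail its kinematic tracker here; as a minimal sketch of the kind of estimator such a tracker builds on, the following runs one predict/update cycle of a one-dimensional constant-velocity Kalman filter with made-up noise parameters:

```python
import numpy as np

def predict(x, P, dt, q=1.0):
    """Propagate state [range, range_rate] and its covariance forward
    to the time of the report under a constant-velocity model."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r=1.0):
    """Fold a range measurement z into the predicted state."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r                  # innovation covariance
    K = P @ H.T / S                      # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One cycle: predict a track at 100 m moving at 10 m/s ahead by 1 s,
# then update with a range report of 111 m.
x, P = np.array([100.0, 10.0]), np.eye(2)
x, P = predict(x, P, dt=1.0)
x, P = update(x, P, z=111.0)
print(x)   # state pulled toward the measurement, uncertainty reduced
```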

Feature-aided tracking encapsulates a suite of algorithms, consisting of signature-aided tracking (SAT), classification-aided tracking (CAT), and classification diffusion-aided tracking (CDAT). SAT uses the degree of similarity between the HRR profiles obtained from successive detections of the target. No inference is made with regard to the identification, or type, of the vehicle under track. In other words, SAT addresses the question of whether the target at the current time is the same as the target from an earlier time. In contrast, CAT uses the degree of similarity between the identification associated with the current report and an existing track; i.e., if the identifications of the report and track are similar, then the report and track are more likely to be associated. Finally, CDAT attempts to mitigate the degradation in target identification that occurs when the tracker associates a report with the wrong track [15]. The approach of CDAT is to blend prior classifications from tracks that are kinematically ambiguous so that built-up classification information is not completely associated with the wrong report.

This article focuses on the more mature feature-aided tracking methods, namely SAT and CAT, and defers further discussion of CDAT until a later date. In the next four sections we describe SAT and CAT in general terms and present initial results that strongly suggest that substantial additional benefit can be obtained from feature-aided tracking.

Signature-Aided Tracking Algorithm and Architecture

For the purposes of feature-aided tracking analysis, we use a kinematic tracker previously developed at Lincoln Laboratory in the 1990s to support a small, lightweight radar payload located on an unmanned air vehicle, or UAV [16]. The focus of this article is not on the details of this particular tracker, but rather on how the feature-aided tracking algorithms fit within this very general type of tracker architecture. In general, a tracker receives M reports from a sensor and attempts to find the best match between these reports and N existing target tracks. (For simplicity, we ignore any discussion of how the tracks are initiated and begin our discussion with the tracks already in existence.) We briefly walk through the basic steps of a generic kinematic tracker to make clear how the feature-aided tracking algorithms are added.

First, for a given hypothesized track-report pair, the tracker determines if the track and report are sufficiently "close" in a kinematic sense such that they may be plausibly associated. If so, the track is propagated (i.e., extrapolated) to the time of the report. A non-negative score is then calculated on the basis of the degree of similarity between the kinematic states of the track and report. This process is repeated for all tracks and reports from the current radar scan, generating a matrix of scores: one number for each plausible track-report pairing. To determine the correct associations, we use an optimization algorithm to find the best pairings among all the viable hypotheses. We now discuss the problem that SAT tries to solve, our approach for solving this problem, and the architecture for implementing this solution.
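The score-matrix assignment step described above can be sketched with a brute-force search over pairings; this stands in for the tracker's actual optimization algorithm, which the article does not specify, and the score values are placeholders:

```python
from itertools import permutations

# Chi-squared scores, one per plausible track-report pair; a real
# tracker would first gate out implausible pairs.
scores = [
    [1.2, 9.5, 8.1],   # track 0 scored against reports 0, 1, 2
    [7.8, 0.9, 6.6],   # track 1
    [8.4, 7.3, 1.5],   # track 2
]

def best_assignment(scores):
    """Return the report index assigned to each track, minimizing the
    summed chi-squared score over all one-to-one pairings."""
    n = len(scores)
    return min(permutations(range(n)),
               key=lambda p: sum(scores[t][p[t]] for t in range(n)))

print(best_assignment(scores))   # (0, 1, 2): each track keeps its own report
```

Brute force is exponential in the number of tracks; production trackers use a polynomial-time assignment algorithm instead, but the objective being minimized is the same.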

Consider the simple scenario shown in Figure 13 as an illustration of the problem, where, at time t, as shown at the left, two targets are widely separated and located on different branches of the road. We arbitrarily designate one of the vehicles as an M1 (shown in blue); it is located on the top branch of the road with aspect angle θ1. The other vehicle we designate as an M109 (shown in red) with aspect angle θ2. The corresponding HRR profiles for the two vehicles are also shown, with the scattering differences between the two clearly visible. At a later time, the two vehicles have merged on the road. Using kinematic information alone, we have an approximately 50% chance of obtaining the correct association. However, if we use the HRR profiles obtained from the targets at time t + ∆t, as shown on the right in Figure 13, and compare them with the target profiles at time t, we expect a greater chance of getting the association correct; i.e., the degree of similarity between the track and report HRR profiles will help resolve the ambiguity of the track-report association. The degree of similarity of the two profiles at t + ∆t with the two profiles at the earlier time t (four possible cases) is used to help determine whether the M1, for example, is the lead or trailing vehicle at time t + ∆t. We now discuss the particulars of the algorithm in some detail and explain how to modify the kinematic tracker for SAT.

The main assumption of SAT is that if the HRR

[Figure 13 is a diagram of two targets, an M1 and an M109, on merging roads relative to the radar line of sight, with HRR profiles (power in dB versus range bin) shown for each vehicle at time t and for the four track-report signature comparisons at time t + ∆t.]

FIGURE 13. Tracking of two targets, an M1 (upper left) and an M109 (lower left), where each starts out moving on a separate road. The goal of the tracker is to associate the M1 track to the M1 report and the M109 track to the M109 report. This association problem is unsolvable given only the targets' kinematic information. We can supplement the kinematic information with the targets' HRR profiles to provide target identification and resolve the ambiguity in the kinematics.


profiles of the track and report indeed come from the same vehicle, then the corresponding differences in the profiles, as measured by the MSE score, should be smaller than if the HRR profiles came from two different vehicles. How do we characterize these differences? Using the DARPA MSTAR vehicles, we computed MSE scores for all possible pairings of vehicles for all aspect-angle differences less than 5°. (We found that for angles greater than approximately 5° it is more difficult to distinguish between vehicles of similar size solely on the basis of their HRR profiles.) The HRR profiles used to generate these results have 1-m range resolution and 20-dB SNR, and were formed by using the superresolution BHDI method developed by Nguyen et al. [8] at Lincoln Laboratory.
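An MSE score of this kind might be computed as follows. The power normalization and the dB conversion are plausible assumptions; the article does not spell out its exact MSE definition here:

```python
import numpy as np

def mse_score(profile_a, profile_b):
    """Mean-squared error between two power-normalized dB profiles.
    Normalization and dB conversion are assumed preprocessing steps."""
    def to_db(p):
        p = np.asarray(p, dtype=float)
        p = p / p.sum()                      # normalize total power
        return 10.0 * np.log10(p + 1e-12)    # guard against log of zero
    a, b = to_db(profile_a), to_db(profile_b)
    return np.mean((a - b) ** 2)

rng = np.random.default_rng(3)
base = rng.uniform(0.1, 1.0, 40)                              # a profile
same = np.clip(base + rng.normal(0.0, 0.02, 40), 1e-3, None)  # same target, noisy
different = rng.uniform(0.1, 1.0, 40)                         # another target
print(mse_score(base, same), mse_score(base, different))
```

A matched pair (same vehicle, nearby aspect) yields a small score, while a mismatched pair yields a much larger one, which is the separation the SAT statistic exploits.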

We observed that the so-called matched scores (the vehicles are the same) and mismatched scores (the vehicles are different) are each normally distributed, with the match mean, as expected, smaller than the mismatch mean. The variance of the mismatch case was slightly larger than the match value. In the following discussion we represent the match and mismatch means by µ_match and µ_mismatch, respectively, and the match and mismatch standard deviations by σ_match and σ_mismatch, respectively. Consequently, given an MSE score for a track-report pair, we can calculate the log-likelihood ratio by using the expressions for the probability density functions (pdf)

$$
\mathrm{pdf}_{\mathrm{match}}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{\mathrm{match}}}\, e^{-(x-\mu_{\mathrm{match}})^2 / 2\sigma_{\mathrm{match}}^2}
$$

$$
\mathrm{pdf}_{\mathrm{mismatch}}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{\mathrm{mismatch}}}\, e^{-(x-\mu_{\mathrm{mismatch}})^2 / 2\sigma_{\mathrm{mismatch}}^2}
$$

to obtain a signature χ2 score,

$$
\chi^2_{\mathrm{signature}}(x) = -2\ln\frac{\mathrm{pdf}_{\mathrm{match}}(x)}{\mathrm{pdf}_{\mathrm{mismatch}}(x)} = \frac{(x-\mu_{\mathrm{match}})^2}{\sigma_{\mathrm{match}}^2} - \frac{(x-\mu_{\mathrm{mismatch}})^2}{\sigma_{\mathrm{mismatch}}^2} + 2\ln\frac{\sigma_{\mathrm{match}}}{\sigma_{\mathrm{mismatch}}}.
$$

The signature χ2 value is then added to the kinematic-based χ2 score to obtain the total SAT χ2 score.

Figure 14 shows the Gaussian probability density functions and the corresponding χ2 scores as a function of MSE score. As expected, the signature χ2 score increases monotonically with the MSE score. Recall that the larger the χ2 score, the less likely it is that the track and report signatures belong to the same target.
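This normalization can be sketched directly from the two Gaussian fits. The distribution parameters below are illustrative stand-ins for values estimated from training profiles:

```python
import math

# Illustrative match/mismatch Gaussian parameters for log-MSE scores;
# the match mean is smaller, and its spread slightly tighter.
MU_MATCH, SIGMA_MATCH = 0.8, 0.25
MU_MISMATCH, SIGMA_MISMATCH = 1.6, 0.35

def signature_chi2(x):
    """-2 ln[pdf_match(x) / pdf_mismatch(x)] for a log-MSE score x;
    this value is added to the kinematic chi-squared to form the
    total SAT score."""
    return ((x - MU_MATCH) ** 2 / SIGMA_MATCH ** 2
            - (x - MU_MISMATCH) ** 2 / SIGMA_MISMATCH ** 2
            + 2.0 * math.log(SIGMA_MATCH / SIGMA_MISMATCH))

# A poor signature match (large log-MSE) draws a large chi-squared
# penalty, disfavoring that track-report pairing:
print(signature_chi2(0.8) < signature_chi2(2.0))   # True
```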

[Figure 14 consists of two graphs: the left plots the match and mismatch Gaussian probability density functions against log(MSE), with means µ and standard deviations σ marked; the right plots the resulting χ2 value against log(MSE).]

FIGURE 14. The left graph shows the match and mismatch Gaussian probability density functions obtained from training HRR profiles at 1-m range resolution and 20-dB SNR. We generate the training data used to estimate the match probability density function by comparing profiles from the same target but separated by up to 5° in aspect. To generate the training data for the mismatch probability density function, we randomly compared profiles from two different targets. The negative log-likelihood ratio of match to mismatch is used to normalize an MSE score into a signature χ2 value. The right graph depicts the transformation of the log-MSE scores into equivalent χ2 values, using the match and mismatch mean (µ) and standard deviation (σ) distributions shown in the left graph. As expected, the χ2 values increase monotonically with log-MSE scores.


We now present the architecture for implementing the SAT algorithm. Figure 15 shows a high-level diagram of this architecture. As was discussed with the kinematic tracker, we assume that there are N existing tracks and M new detections (reports) at time t. Furthermore, for the sake of clarity and simplicity, we assume that each track and report has an HRR profile. Consider first a single hypothesized track-report pair. As before, using kinematic information alone, we determine if the track and report are sufficiently close that the hypothesized pairing is plausible; i.e., are the range, range rate, and angle, for example, sufficiently similar? If so, we project the kinematic state of the track forward to the time of the report and we compute the kinematic-based χ² score. Since HRR profiles are quite sensitive to target aspect angle, resulting in a loss of target specificity after an aspect-angle change of approximately 10°, we need to exclude those signature scores computed by matching track and report signatures differing by more than 10° in aspect. To properly determine the aspect angle of a report, the tracker must calculate an updated estimate of the track aspect angle. We therefore assume for the moment that this track does associate with the report, and consequently we filter the report data with the track to obtain an updated estimate of the track state.
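The kinematic gating and scoring step can be sketched as follows. This is a simplified illustration rather than the article's implementation: it assumes a diagonal innovation covariance, and the function names are ours.

```python
def kinematic_chi2(innovation, variance):
    """Chi-square of the track-report innovation (measurement residual),
    assuming independent components with the given variances."""
    return sum(d * d / v for d, v in zip(innovation, variance))

def gate(innovation, variance, threshold):
    """A pairing is kinematically plausible if its chi-square score
    falls below the gate threshold."""
    return kinematic_chi2(innovation, variance) < threshold
```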

Using the updated state estimate, we determine the aspect angle of the report and ask the following question: has the aspect angle between the track and the report changed so much that SAT should not be used? If the answer is yes, then we, as with the traditional kinematic tracker, use only the kinematic information to determine the score associated with this track-report pair. If, however, the track has undergone a sufficiently small change in aspect angle, then we calculate the signature χ² score, which is then added to the kinematic χ² value to obtain a total χ² score. Repeating this process for all possible track-report pairings, and properly normalizing the scores to account for the different numbers of degrees of freedom, we obtain an association matrix from which the final track-report pairings are made by optimizing with the Munkres algorithm.

FIGURE 15. The key components of the signature-aided tracker with HRR profiles as features. For each hypothesized track-report association, a maneuver is detected and a kinematics χ² value is computed from the extrapolated track state. Each track and report contains, in addition to the kinematics information, an HRR profile, with the track also containing an estimated aspect angle of its HRR profile. The Kalman filter then updates the state of the track to the hypothesized report, from which the report-HRR aspect angle is estimated. If the track and report HRR profiles differ by a large aspect angle, then the feature information is ignored. Otherwise, a signature χ² is computed and fused with the kinematics χ².
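The per-pair scoring logic described above can be sketched as follows. The 10° aspect gate comes from the text; the function and variable names are illustrative.

```python
def sat_pair_score(kin_chi2, track_aspect_deg, report_aspect_deg,
                   sig_chi2, max_aspect_diff_deg=10.0):
    """Total SAT score for one hypothesized track-report pair."""
    # Wrap the aspect-angle difference into [0, 180] degrees
    d = abs(track_aspect_deg - report_aspect_deg) % 360.0
    d = min(d, 360.0 - d)
    if d > max_aspect_diff_deg:
        # HRR profiles lose target specificity beyond roughly 10 degrees,
        # so fall back to the kinematic score alone
        return kin_chi2
    return kin_chi2 + sig_chi2
```

Each entry of the association matrix is one such score; the Munkres algorithm then selects the pairing that minimizes the total.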

Discussion of SAT Performance Results

We now present some initial SAT results that indicate the level of improvement to expect in tracker performance. To obtain these results, we use instrumented ground vehicles, simulated target detections, and HRR profiles derived from SAR imagery. The vehicle motion data were collected for the DARPA Affordable Moving Surface Target Engagement (AMSTE) program at Patuxent River Naval Air Station, Maryland, in 1999, by using Global Positioning System (GPS)-instrumented vehicles. The sensor detections were derived from the GPS data by adding sensor noise and measurement error consistent with a typical radar sensor. The radar scan interval used to generate the detections was set to five seconds, which is consistent with a small-region-of-interest rapid-scan mode. The probability of detection was set to 0.9. The MTI detections were augmented with HRR profiles formed with the superresolution BHDI method [7] at 20-dB SNR with 1-m range resolution, using SAR images collected for the DARPA MSTAR program.
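A detection simulation of the kind described can be sketched as below. The function is a hypothetical stand-in for the actual AMSTE simulation; it assumes Gaussian position noise and an independent miss probability per scan.

```python
import random

def simulate_detections(truth, sigma_pos=5.0, p_detect=0.9, seed=None):
    """Turn GPS truth samples (time, (x, y)) taken at the scan interval
    into noisy radar detections, dropping each scan independently with
    probability 1 - p_detect."""
    rng = random.Random(seed)
    reports = []
    for t, (x, y) in truth:
        if rng.random() < p_detect:  # probability of detection
            reports.append((t, (x + rng.gauss(0.0, sigma_pos),
                                y + rng.gauss(0.0, sigma_pos))))
    return reports
```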

We consider a simple scenario consisting of two vehicles, an M1 and an M109, traveling along a straight road. For this scenario, there are 320 detections, with 130 of these detections resulting in potential misassociations. Figure 16 illustrates one particular example of the type of confusion that may occur in this scenario. At times t and t + ∆t the M109 is in fact ahead of the M1. However, from the perspective of the tracker, an equally plausible alternate hypothesis exists: the M1 overtook the M109 during the time interval between scans and is now ahead of the M109. Figure 16 illustrates these two kinematically ambiguous hypotheses.

Using kinematic information alone, the tracker got the vehicle identification wrong 21% of the time. That is, 27 times out of 130, the tracker paired the reports with the wrong tracks. In contrast, by using SAT, the number of misassociations was reduced from 21% to 9%; i.e., the number of misassociations was reduced by more than a factor of two. Figure 17 illustrates these results. This simple example illustrates the potential for SAT to help the tracker resolve kinematically confusing close encounters between vehicles. Clearly, additional work (e.g., more challenging scenarios with an expanded set of tracker measures of performance, such as track lifetime and continuity) is needed before a more quantitative measure of the added value of SAT can be stated with full confidence. These results strongly suggest, however, that additional effort in this area is warranted.

One-Dimensional SAT Variants

We have seen how HRR profiles can be used to improve tracker performance. HRR profiles are, however, typically obtained from vehicles that are moving along a relatively straight line with a range rate above the minimum detectable velocity. When a vehicle turns or stops, it is possible to obtain two-dimensional signatures that are more informative than HRR profiles. In this section, we present two variants of SAT that make the best use of the vehicle signatures collected while the vehicle is turning or stopped. The proposed approach takes SAR and ISAR images, obtained while the target is stopped or turning, respectively, and collected at target aspect angle θ. We then generate HRR profiles at an angle θ + ∆θ, where ∆θ is large (i.e., much greater than the few degrees we would normally anticipate). The ability to generate HRR profiles at angles substantially different from the image collection angle significantly extends the range of applicability of SAT.

FIGURE 16. A sample two-target tracking scenario consisting of an M1 and an M109 following each other closely. Two position measurements of each vehicle are shown. Hypothesis 1 depicts the true trajectory of the targets, with the M1 following the M109. Alternatively, the M1 could have passed the M109 between the two position measurements, which would result in a reversal of track identification, as shown in hypothesis 2.

We first consider the case of HRR profiles derived from a SAR image. The procedure for generating HRR profiles for angles present in the SAR image is well understood and commonly used. What is new, however, is that by performing a simple rotation of the SAR image, we can obtain representative HRR profiles at angles outside the range of normally expected values. There apparently are target features (e.g., dominant scatterers) present in the two-dimensional image that are sufficiently robust to these types of operations (i.e., simple rotations), at least for the purposes we are considering (one-dimensional SAT). For example, in Figure 18, we show the distribution of match and mismatch MSE scores for HRR profiles matched against HRR profiles derived from SAR images. The match scores are formed from the HRR profile of a Scud TEL at angle θ against the HRR profile of a Scud TEL derived from a SAR image at angle θ + 30°. The mismatch scores are obtained by comparing the HRR profile of a Scud TEL at an angle θ against the SAR-derived HRR profile of an M60 tank at aspect angle θ + 30°. The HRR profiles are at 1-m range resolution and 35-dB SNR. For the case considered here, the separation between the match and the mismatch scores is clearly evident, and as a result we expect the SAR-derived HRR profiles to provide greater aspect-angle diversity, thus extending the applicability of SAT.

FIGURE 17. (a) The two-target simulation contains a total of approximately 450 detections made at five-second update intervals and a probability of detection of 0.9. Of these detections, 130 could potentially result in misassociations. (b) Using only kinematics information, the kinematics tracker made 27 misassociations, or 20.8% of the 130 possible misassociations. In signature-aided tracker (SAT) mode, the number of misassociations dropped by more than half, to 12, or 9.2% of the possible misassociations.

FIGURE 18. The match and mismatch probability density functions resulting from comparing HRR profiles of a Scud TEL against SAR-derived HRR profiles of a Scud TEL and an M60 tank, respectively. The aspect-angle separation between HRR profile and SAR image was approximately 30°. In the match case, SAR images of the Scud TEL are rotated to the aspect angle of the HRR profiles. By collapsing the rotated SAR image in range, we can form an HRR profile. The resulting HRR profile is then matched against the given HRR profile. In the mismatch case, we replace the SAR images of the Scud TEL with images of an M60. The HRR profiles used in this case have a range resolution of 1 m and an SNR of 35 dB.
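The collapse of a rotated image chip into an HRR profile can be sketched as below. This assumes the chip is already rotated to the desired aspect and is indexed as [range bin][cross-range bin]; summing pixel power over cross range is one plausible choice of projection, and the function name is ours.

```python
def hrr_from_image(chip):
    """Form a 1-D HRR profile from a 2-D image chip by integrating
    pixel power over the cross-range dimension of each range bin."""
    return [sum(p * p for p in range_bin) for range_bin in chip]
```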

For example, as shown in Figure 19, the separation between the means of the match and mismatch scores persists over a very large range of aspect angles, far greater than could reasonably have been anticipated. Naturally, these results most likely reflect the great dissimilarity between the Scud TEL and the M60 tank, and we fully expect the separation to lessen as the vehicles become more similar in size and as the depression angle is reduced to values more consistent with long-range surveillance geometries.

The same approach can, in general terms, be applied to HRR profiles derived from ISAR images. Figures 20 and 21 show the corresponding results. Although this method is apparently not as robust as the SAR-based method, there is still considerable separation between the match and mismatch scores, which persists over the full range of angles from 0° to 180°.

In summary, target features in HRR profiles derived from SAR and ISAR images that have undergone significant rotations (to align them with the corresponding HRR profile) appear to be robust to angle diversity, thus significantly extending the applicability of SAT to more challenging scenarios.


FIGURE 19. The expected match χ² values compared against the expected mismatch χ² values for aspect-angle differences from 10° to 180°. At all aspect-angle differences, the χ² probability density functions show a notable separation between match and mismatch.

FIGURE 20. The match and mismatch probability density functions resulting from comparing HRR profiles of a Scud TEL against ISAR images of a Scud TEL and an M60, respectively. The aspect separation between HRR profile and ISAR image was approximately 30°. We perform the match and mismatch comparisons in the same manner as in the HRR-versus-SAR mode. However, the ISAR mode introduces an additional complication in that the cross-range resolution must be estimated prior to rotation. The MSE computation, in this case, requires a search in cross range for the appropriate resolution.

FIGURE 21. The expected match χ² values compared against the expected mismatch χ² values for aspect-angle differences from 10° to 180°. As shown, the mean match χ² increases with the aspect-angle difference, and the separation between the match and mismatch χ² probability density functions decreases. For aspect-angle differences greater than 60°, the match distribution falls almost entirely within the mismatch distribution. Consequently, a χ² value near zero could have resulted from either a match or a mismatch comparison.


Classification-Aided Tracking Algorithm and Architecture

Classification-aided tracking (CAT) uses information about the identification of the target to help the tracker resolve kinematic ambiguities [17, 18]. Unlike SAT, which uses the target signature directly to determine if the target at time t + ∆t is the same as the target at an earlier time t, CAT goes one step further and determines if the target at the two times is the same vehicle type. The degree of similarity between the identification of the track and the identification of the report is used to compute a CAT score, which, as with SAT, is added to the kinematics-based score. We can think of CAT as closing the feedback loop in the tracker with classification: classification information on targets is built up as successive reports are associated and accumulated in the tracker. With CAT, the classification itself is used to improve the tracker association performance, which in turn improves classification. In this section we present the CAT algorithm and a high-level architectural diagram illustrating the implementation of the method, and we conclude with some preliminary but suggestive results.

As with SAT, CAT functions by modifying the pairwise χ² error between reports and tracks. The added term is referred to as the CAT score, with a small value designating a good match that increases the likelihood of association, and a large value indicating a poor match that decreases the likelihood of association. To calculate the CAT score we assume that we have an estimate of the identification of the target track; i.e., we have some prior knowledge of the track identification, built up from previous reports. The first step in calculating the CAT score is to compute an estimate of the identification of the target associated with the sensor report. To perform this calculation, however, we must first have an updated estimate of the track aspect angle. As we did with SAT, we obtain a hypothesized estimate of the report aspect angle by updating the track state with the report information, as if the track were to associate with the report. We then match the report HRR profile with HRR profiles in the template library for angles near the estimated angle. We compute an MSE vector M, where each element of the vector corresponds to an MSE score between the report HRR profile and the best matching template HRR profile taken from each class type. By properly conditioning the MSE scores as described earlier, we can approximately fit a multivariate Gaussian distribution to the MSE vector whose first-order and second-order statistics have been precalculated during training. Using Bayes' rule, we can combine a priori knowledge of the track identification with the multivariate Gaussian likelihood generated from the MSE vector to calculate the a posteriori probability that the report corresponds to a target of a particular type. Given the set of a posteriori probabilities, referred to as the classification vector, we can now calculate the CAT score, as described below.
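The Bayes update described above can be sketched as follows. This is a deliberate simplification: the article fits a multivariate Gaussian to the full MSE vector, whereas this sketch treats each class's MSE score with an independent univariate Gaussian; the names are illustrative.

```python
import math

def classification_vector(priors, mse, mu, sigma):
    """Posterior class probabilities from per-class priors and Gaussian
    likelihoods of the per-class MSE scores (independence assumption)."""
    like = [math.exp(-0.5 * ((m - u) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for m, u, s in zip(mse, mu, sigma)]
    unnorm = [p * l for p, l in zip(priors, like)]
    z = sum(unnorm)  # normalization constant
    return [u / z for u in unnorm]
```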

To determine if track i should associate with report j, we ask the following question: what is the likelihood that track i gave rise to (is connected with) report j on the basis of identification information? Basically, as the degree of similarity between the identifications (as measured by the classification vectors) of track i and report j increases, it becomes more likely that this particular pair should be associated. To compute this likelihood, we let the probability that track i gave rise to report j be given by the expression

P(track_i → report_j) ∝ Σ_{n=1}^{N+1} P_n(track_i) L_n(report_j | M),

where P_n(track_i) is the prior probability that track i is a target of type n, and where n = 1, 2, …, N + 1. (The N + 1 element is for targets of the other class.) The term L_n(report_j | M) is the likelihood that a target of type n for the jth report gave rise to the MSE vector M when the jth report is compared with the templates from the N classes.

To illustrate the computation of the CAT score in simpler terms, we consider the following example. Let the number of classes N be 1; i.e., we are interested only in determining if the target is of a single particular type, say a Scud TEL. Consequently, the N + 1 = 2 elements of the classification vector correspond to the probabilities that the target is a TEL or a non-TEL. For this case, the expression for the CAT score can be expressed as


P(track_i → report_j) ∝ (probability that track i is a TEL) × (likelihood that a TEL gave rise to the MSE score) + (probability that track i is other) × (likelihood that an other gave rise to the MSE score).

Given the value of P(track_i → report_j), we compute the χ²_ID score as

χ²_ID = −log[ P(track_i → report_j) ],

and subsequently add the term χ²_ID to the kinematic score χ²_kinematic to obtain the CAT score χ²_CAT.

Figure 22 illustrates a high-level architectural diagram for the CAT algorithm. The steps are nearly identical to those of SAT in that a feature-based score is computed and added to the usual kinematic score. The difference from SAT, of course, is that the score is now derived from an estimate of the target identification. The steps of this block diagram have been described in detail in the earlier section entitled "Signature-Aided Tracking Algorithm and Architecture."
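For the two-class TEL example, the computation can be sketched as follows; the function name and argument names are illustrative.

```python
import math

def cat_id_chi2(p_tel, like_tel, like_other):
    """Identification chi-square for the TEL / non-TEL example:
    -log of the total probability that the track gave rise to the report."""
    # Total probability over the two classes (TEL and other)
    p = p_tel * like_tel + (1.0 - p_tel) * like_other
    return -math.log(p)
```

A track confidently believed to be a TEL (large p_tel) paired with a report whose MSE vector looks TEL-like (large like_tel) yields a low χ²_ID, favoring the association.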

Discussion of CAT Performance Results

We now discuss some preliminary results using CAT. We consider a six-class classifier, in which the vehicles of interest are the M1, M109, T72, M110, M113, and BMP2, plus the other class, consisting conceptually of all vehicles that are not members of the six classes. As before, we use BHDI-formed HRR profiles at 1-m range resolution and 20-dB SNR. Likewise, we use the same radar simulation parameters (i.e., radar noise, measurement error, and a five-second scan interval) as with SAT. We first examine target identification performance and subsequently show how identification can be used to improve tracker performance, as measured by the number of correct track-report associations, with the CAT method.

For the purposes of assessing identification performance, we consider a single-target scenario and designate the target as an M109. The vehicle identification of each track is characterized by a classification vector. For example, the first element is the probability that the target is an M1, the second that the target is an M109, and so on. In the top graph of Figure 23, the accumulated a posteriori probability that the target is an M109 is shown as a function of scan number. A blue dot indicates that the largest value of the classification vector is the element corresponding to the M109. In contrast, red dots indicate those times when the largest value of the classification vector belonged to the other class. For this case, the tracker correctly identified the target as an M109 74% of the time.

FIGURE 22. The incorporation of the HRR ATR classifier into the kinematics tracker to form the classification-aided tracker (CAT). For each kinematically possible association between track and report, we compute the kinematics χ² score as described previously in Figure 15.

To help understand the possible sources of error in classification, we examined the following parameters: the error in the aspect-angle estimate, the value of the aspect angle, and the speed of the vehicle. We anticipate that the identification performance will degrade as the error in the aspect-angle estimate increases. Likewise, as the target approaches broadside, we also expect performance to degrade because targets appear more similar when viewed from the side. Furthermore, as the speed of the vehicle approaches zero, the range rate will necessarily also approach zero, resulting in a poorer estimate of the target velocity and corresponding heading, and ultimately in a poor estimate of the target aspect angle. In the second and third graphs of Figure 23, we observe that when there is a large change in aspect angle, there is often a significant increase in the error of the estimated aspect angle, resulting in a decrease in target identification performance, as indicated by the increase in the number of red dots. For example, for scan numbers between 300 and 400, the vehicle undergoes significant changes in aspect angle (third graph), resulting in large spikes in the error of the aspect-angle estimate (second graph), giving rise to a larger number of red dots (top graph). During this interval, the speed of the vehicle changed markedly, often nearing zero, which further exacerbated the estimation of the target aspect angle. And finally, although not clearly evident from this figure because of the scale, it can be shown that the identification performance did indeed degrade when the target was near broadside.

FIGURE 23. The top graph contains the accumulated posterior probability of classification for the M109 as a function of scan number. A blue dot in this graph indicates that the largest value in the classification vector for this scan corresponded to the M109. A red dot indicates that the largest value in the classification vector corresponded to the other class. On average, the probability of correctly classifying the M109 was 0.74. The second graph shows the aspect error resulting from estimating the M109's aspect angle from its heading. The third graph shows the estimated aspect angle of the M109. The bottom graph depicts the M109's true speed as a function of scan number.

To estimate the expected improvement in tracker performance as the estimate of the target aspect angle improves, we simply substituted the true value of the aspect angle for the estimated value; i.e., we assumed there was no error in the estimation process. This approach provides an upper bound on the gain in performance attributable to errors in angle estimation. Figure 24 shows that there are fewer red dots indicating misclassification in this case, raising the percentage of time the tracker correctly identified the target as an M109 from 74% to 83%. Consequently, this change suggests that any improvement to the tracker that decreases the error of the aspect-angle estimate, such as added road constraints, can be expected to provide a significant improvement in identification performance.
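The aspect angle whose estimation errors drive the effects above is derived from the estimated heading and the sensor-to-target line of sight; a minimal sketch (our own formulation, not the article's) is:

```python
def aspect_angle_deg(heading_deg, los_bearing_deg):
    """Target aspect angle from the estimated heading and the bearing of
    the sensor-to-target line of sight, wrapped to [0, 180] degrees."""
    d = abs(heading_deg - los_bearing_deg) % 360.0
    return min(d, 360.0 - d)
```

As the vehicle slows, the heading estimate degrades, and this derived aspect angle degrades with it, which is the failure mode seen near scans 300 to 400 in Figure 23.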

We now demonstrate how CAT, using target identification, can be used to improve tracker performance. We consider a scenario in which two vehicles approach an intersection from different roads. At the intersection, one vehicle (an M109) turns right and the other (an M1) turns left, as illustrated on the left in Figure 25. This results in a kinematically challenging situation. With the usual motion models, the illustration on the right in Figure 25 is a more likely hypothesis for this scenario, where each vehicle is assumed to continue along a straight path. Using kinematic information alone, the tracker chose the correct path 20% of the time; i.e., it determined that the vehicles had turned. In contrast, with CAT, the percentage increased to 100%, a significant improvement in tracker performance.

The CAT performance results, shown in Figure 26, show tracker improvement and suggest a strong dependence of performance on a priori knowledge of each track's vehicle identification. In particular, as the prior knowledge of the identity of the vehicle increases, the separation between the target identification χ² values corresponding to the correct and incorrect associations increases, resulting in an increase in the probability of correctly identifying the target. For example, as the value of the prior increases from 0.5 to 0.9, the χ² value for an incorrect association rises from –5 to –3 (a larger penalty), while the χ² value for a correct association falls from –7 to –8 (a lower score is better).


FIGURE 24. The accumulated posterior probability of classification for the M109, given perfect estimation of the report aspect angle, as a function of scan number. The blue dots indicate that the M109 was correctly classified with maximum a posteriori probability. The red dots indicate that the M109 was classified as an other vehicle. On average, the probability of correctly classifying the M109 was 0.83.

FIGURE 25. Consider a tracking scenario in which two targets, an M109 and an M1, moving on separate roads come together briefly at an intersection and then each turns and diverges. The left image shows the true motion of the vehicles (hypothesis 1), while the right image shows a kinematically possible alternative (hypothesis 2). If a linear motion model (constant velocity) is used in the Kalman filter, we expect the tracker to choose hypothesis 2, which would be an incorrect description of the tracking scenario. With classification-aided tracking (CAT), however, the true motion shown in hypothesis 1 is correctly tracked 100% of the time.


That is, as prior knowledge increases, the penalty for an incorrect association also increases, resulting in a better chance of correctly identifying the target. In addition, as the scan interval decreases from ten seconds to five seconds, the probability of correctly identifying the target increases, as expected, with a very pronounced benefit for cases in which the prior knowledge is large, such as when the prior equals 0.8.

Naturally, additional work is needed to quantify the impact of CAT on tracker performance more reliably, but these preliminary results suggest that such additional work is warranted.

Two-Dimensional Feature-Aided Tracking Variants

Thus far we have considered the HRR profile to be the target feature, which is subsequently used for target identification or to help resolve kinematic ambiguities with the SAT and CAT feature-aided tracking algorithms. Although the HRR profile is readily obtainable with a high-bandwidth waveform, it contains limited target feature information; a few tens of pixels represent the entire target. In addition, the HRR profile is known to be very sensitive to angle variation, thus significantly limiting the robustness of HRR-based SAT algorithms. In contrast, a two-dimensional image of the target contains significantly more information about the target scattering. Moreover, HRR profiles are available only when the target is moving along a relatively straight path. When the target stops or turns, we have the opportunity to collect two-dimensional signatures. We show how these SAR and ISAR images may be used in a meaningful way to improve tracker performance.

In this section we show how two-dimensional images, in particular SAR and ISAR images, can be used in a two-dimensional variant of the SAT algorithm previously discussed. Furthermore, we show that the two-dimensional images are significantly more robust to aspect-angle variability. In addition, we recall that SAR and ISAR images can be used to generate HRR profiles at angles significantly different from those contained in the image; i.e., a few two-dimensional images can be used to populate an HRR template database covering a significant portion of the full 0°-to-360° range of aspect angles. This latter capability leads to the possibility of using SAR images of opportunity to construct an on-the-fly template set for a target type lacking previously collected classification templates.


FIGURE 26. Improvement in CAT performance as a function of prior knowledge. The target identification χ² values for correct and incorrect associations are shown as a function of the a priori value. As prior knowledge increases, the gap between the χ² values increases, showing improved discrimination between the correct and incorrect associations. As a result, the probability of correct classification also increases as the prior knowledge increases.


Two-Dimensional Signature-Aided Tracking

Consider a situation in which a vehicle stops, starts moving again, and then stops. For a high-valued target, the operator could collect a SAR image of the target each time the target stopped. The degree of similarity between the two SAR images can be used to determine whether the stopped vehicle is in fact the same vehicle in each image. The vehicles in the two images are, however, most likely at different aspect angles. Before a comparison of the SAR images can be made, the vehicles must be rotated to a common aspect angle, and before the vehicles can be rotated, an accurate estimate of the target aspect angle must be obtained. We have developed such a method but, for the sake of brevity, we do not include it in this discussion. Given the aligned, decluttered SAR images, the appropriately normalized two-dimensional signature χ2 value is calculated from the MSE score obtained by comparing the two images. As with the one-dimensional version of SAT, where HRR profiles are compared, the signature χ2 value is added to the kinematic χ2 score to obtain a two-dimensional SAT score.
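The score combination just described can be sketched in a few lines. This is a minimal illustration under our own assumptions: the function names are ours, and `mse_scale` stands in for the article's (unspecified) normalization that maps the MSE score onto a χ2-like scale.

```python
def mse(img_a, img_b):
    """Mean-squared error between two aligned, equal-size images,
    each given here as a flat list of pixel magnitudes."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def sat_score(kinematic_chi2, img_track, img_report, mse_scale):
    """Two-dimensional SAT score: the kinematic chi-square plus a
    signature chi-square obtained by normalizing the image MSE.
    `mse_scale` is a hypothetical normalization constant."""
    signature_chi2 = mse(img_track, img_report) / mse_scale
    return kinematic_chi2 + signature_chi2

# Identical chips add nothing; differing chips raise the score
same = sat_score(2.0, [1.0, 2.0], [1.0, 2.0], 2.0)   # 2.0
diff = sat_score(2.0, [1.0, 2.0], [3.0, 2.0], 2.0)   # 2.0 + (2.0 / 2.0) = 3.0
```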

We anticipate that the MSE scores from aligned SAR images corresponding to the same vehicle (the match case) will be lower than the MSE scores obtained from different vehicles (the mismatch case). Figure 27 shows the scores for the match and mismatch cases. The match scores were generated by comparing the SAR images of a Scud TEL at aspect angles separated by 30°, while the mismatch scores were obtained by comparing the SAR images of a Scud TEL and an M60 tank. It is evident that there is significant separation between the match and mismatch mean values, with minimal overlap in the distributions of scores. The fact that the two images are 30° apart indicates that SAR images are more robust to aspect-angle variability when used for SAT. However, part of the separation in distributions, possibly most of it, is due to the fact that the Scud TEL and the M60 tank have different lengths. These results suggest that this two-dimensional SAT variant can provide additional benefit to the tracker. Future work will integrate this algorithm into the tracker, and we will perform further evaluation of the algorithm at that time.

We just described an algorithm that addresses the kinematic ambiguity that can result when a vehicle executes a "stop-move-stop" maneuver. We now present a method for dealing with the so-called "move-turn-stop" scenario. Consider a vehicle that travels down a road, makes a turn, and then stops. If the operator has information about the road network, he can schedule an ISAR collection when the vehicle is anticipated to turn. If the operator is tracking a high-valued vehicle, he can justify expending the extra radar resources for an ISAR image, since the two-dimensional image contains more information about the target than an HRR profile does. If the vehicle stops sometime thereafter, the operator can collect a SAR image of the area where the vehicle was last known to be located. The new variant of SAT that we propose compares the ISAR and SAR images in a manner similar to the procedure discussed for comparing two SAR images. As with the two SAR images, we must first align the ISAR and SAR images to a common aspect angle.


FIGURE 27. The match and mismatch probability density functions resulting from comparing SAR images separated by 30° in aspect angle. For the match case, we compared SAR images of a Scud TEL separated by 30° in aspect angle and rotated so that the images were aligned in aspect angle. In the mismatch case, we compared SAR images of a Scud TEL against images of an M60 tank separated by 30° in aspect angle and rotated so the images were aligned in aspect angle. The MSE scores for the match case are significantly lower than for the mismatch case, as expected, with minimal overlap in the distributions.


Once aligned, a simple MSE calculation in range and cross range is performed, the appropriately normalized signature χ2 value is calculated, and this value is added to the kinematic χ2 value to obtain the total SAT χ2 score. In Figure 28, we show the distribution of match and mismatch scores, where the match scores are obtained by comparing the SAR and ISAR images of a Scud TEL, which before alignment are separated by 30°, while the mismatch scores are obtained by comparing the SAR image of a Scud TEL with the ISAR image of an M60 tank, also initially separated by 30°. The separation in distributions between the match and mismatch scores provides further evidence that the target signatures present in the two-dimensional images are more robust to target aspect-angle variability than the HRR profile.

The last variant of two-dimensional SAT we present involves the comparison of two ISAR images. For this case, the operator has collected ISAR images at two different times when the vehicle is turning. With proper alignment of the targets to a common aspect angle, the MSE difference is calculated. In Figure 29, the distribution of match and mismatch scores is calculated, as was done for the previous cases in Figure 27 (SAR versus SAR) and Figure 28 (SAR versus ISAR). We see reasonable separation between the match and mismatch scores, thus providing the basis for hope that, with two-dimensional SAT, the tracker can make beneficial use of the extra information present in the images.
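All three two-dimensional SAT variants rest on an MSE comparison, and the ISAR cases additionally require a search for the appropriate cross-range resolution before the MSE is meaningful. The sketch below illustrates that search on one-dimensional cross-range cuts; the function names, the nearest-neighbor resampling scheme, and the candidate scale set are all our assumptions, not the article's implementation.

```python
def resample(seq, factor):
    """Nearest-neighbor resampling of a 1-D cross-range cut by
    `factor`, keeping the original number of samples."""
    n = len(seq)
    return [seq[min(int(i / factor), n - 1)] for i in range(n)]

def min_mse_over_scales(ref, test, scales):
    """Search candidate cross-range scale factors and return the
    smallest MSE between `ref` and the rescaled `test` cut."""
    best = float("inf")
    for s in scales:
        t = resample(test, s)
        err = sum((a - b) ** 2 for a, b in zip(ref, t)) / len(ref)
        best = min(best, err)
    return best

# A cut imaged at half the cross-range resolution matches at scale 2
ref = [0, 0, 1, 1, 2, 2, 3, 3]
test = [0, 1, 2, 3, 3, 3, 3, 3]
score = min_mse_over_scales(ref, test, [0.5, 1.0, 2.0])
```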

Two-Dimensional Classification-Aided Tracking

CAT uses target identification to help the tracker resolve kinematically ambiguous situations. We have presented a method for obtaining an estimate of the target identification by using HRR profiles. In this section, we show how target identification obtained from two-dimensional images, SAR and ISAR, can also be used. This estimate of target identification can be used to obtain an identification χ2 score that can be added to the kinematic score to obtain the CAT χ2 value.

When a high-valued target stops, it is possible to collect a SAR image of the area where the vehicle was last known to be moving. A chip of the target from the SAR image can be obtained by using fixed-target


FIGURE 28. The match and mismatch probability density functions resulting from comparing SAR images of a Scud TEL against ISAR images of a Scud TEL and ISAR images of an M60 tank, respectively. The aspect-angle separation between the SAR and ISAR images was approximately 30°. For both match and mismatch cases, the SAR and ISAR images were aligned in aspect angle through rotation, and then compared. Moreover, the MSE computation required a search in cross range for the appropriate resolution.

FIGURE 29. The match and mismatch probability density functions resulting from comparing ISAR images of a Scud TEL against ISAR images of a Scud TEL and an M60 tank, respectively. The aspect-angle separation between ISAR images was approximately 30°. The MSE calculation between a pair of ISAR images required a search for the appropriate cross-range resolution for both images.


indication and extraction methods. Given the target chip, the target identification can be estimated by using a template-based matching scheme. On the basis of the Semi-Automated Image Intelligence Processing (SAIP) work of Novak [2], we can expect the estimate of the target identification obtained with the SAR image to be significantly better than that obtained with an HRR profile. For example, in Figure 30, we show the SAR image and HRR profile of the Scud TEL. Clearly, more information about the target structure is present in the two-dimensional SAR images than in the one-dimensional HRR profiles. Furthermore, with the advances we have made in superresolution techniques, BHDI in particular, the estimate of the target identification is better than previously reported. In Figure 31, we show the gain in performance, using a SAIP ATR classifier, for a six-class classifier, as measured by a ROC curve, for BHDI versus baseline HDVI and conventional baseline FFT processing. The improvement, especially at the operationally important lower Pfc values, say 0.1 to 0.01, is significant.

When a high-valued target turns, an operator may collect an ISAR image. A template-based matching scheme may also be used for ISAR images. In this case, however, we need to compare ISAR images against SAR images in the template database. Figure 32 illustrates an example of this type of comparison. Previous work by Levin on template-based ISAR ATR [4], and our work on signature-aided tracking, summarized in Figure 28, showed that there are sufficient differences between Scud TEL and M60 tank ISAR images that reasonable discrimination can be obtained. Our ongoing feature-aided tracking work incorporates the ISAR mode into the tracker to help improve both SAT and CAT.


FIGURE 30. The two upper images depict a SAR image of a Scud TEL and its best-matched SAR template, while the graphs below show HRR profiles corresponding to the test and template images. Visual inspection of these images and graphs clearly shows that the two-dimensional SAR mode provides significantly higher matching confidence than the one-dimensional HRR counterpart.


Summary

Identification and tracking of moving targets are coupled problems, consisting of identification techniques that use successive target signatures and accumulate classification information in a tracker, and feature-aided tracking techniques that take advantage of target signatures and accumulated classification information to improve tracker performance. In this article we consider the problems of improving moving-target classification and track continuity by using radar signatures of the targets, e.g., HRR, SAR, and ISAR. Our feature-aided tracker incorporates these radar signatures into a kinematic tracker algorithm [16] using two modes: classification and verification.

In the classification mode, a one-dimensional template-based automatic target recognizer provides target identification based on the target's HRR profile. For operational consideration, we used feature-enhancement algorithms such as Benitz's BHDI to improve the feature quality of the HRR profiles. We demonstrated a Bayesian classifier using the resulting super-resolved profiles to provide moving-target identification. By incorporating the classifier into the traditional tracker, we then established a technique by which the HRR feature information can be fused with the kinematic information to improve track-to-report association and resolve kinematic ambiguities. To demonstrate the performance of the classification-aided mode, we simulated a kinematically confusing two-target scenario by emulating target MTI detec-

FIGURE 31. Performance results using the Semi-Automated Image Intelligence Processing (SAIP) two-dimensional ATR classifier for SAR images of the same six vehicles demonstrated in the HRR case (and shown in Figure 7). These results indicate that BHDI-processed SAR images provide the best ATR performance, compared to baseline HDVI and Taylor-weighted FFT methods. More importantly, the two-dimensional classifier using SAR images provides much more reliable classification confidence than the one-dimensional classifier using HRR profiles.

FIGURE 32. Comparison of an ISAR image and a SAR template image of the same target, showing the degree of similarity of the images. The ISAR image on the left shows a Scud TEL moving in a circular arc of 50-m radius and turning at a rate of 9.78° per second. The ISAR image has a 1-ft range resolution and approximately 1-ft cross-range resolution. The SAR image on the right shows the exact same target, also at 1-ft by 1-ft resolution.

[Figure 31 plot: probability of correct classification (0.80 to 1.00) versus probability of false classification (10^−4 to 10^0) for BHDI (1 ft), baseline HDVI (1 ft), and baseline FFT (1 ft); targets: M1, T72, M109, M110, BMP2, M113.]

[Figure 32 images: ISAR image (left) and SAR template image (right), range versus cross range in pixels.]


tions augmented with HRR profiles by merging noise-injected GPS data with profiles generated from SAR images taken from the MSTAR and Data Collection System (DCS) data sets. The CAT mode successfully resolved all track-report misassociations made by the kinematic tracker. Furthermore, we showed that the performance improvement provided by ATR can be quantified as a function of the a priori knowledge of the tracks' vehicle types.

In the verification mode, track and report signatures are systematically compared, on the presumption that these signatures are similar if they come from the same target. We demonstrated an approach to fuse the signature-comparison results with the UAV's kinematic information by using a log-likelihood ratio. Applying this approach to a simulated kinematically confusing two-target scenario, we showed that the signature-aided component provided only modest improvement to the traditional tracker, because of the variability of HRR features with aspect angle.

To improve the performance of the SAT mode, we considered a supplemental approach to handle large aspect differences by incorporating SAR and ISAR images into the tracker. We demonstrated metrics to compare HRR, SAR, and ISAR signatures, and we extended the range of aspect-angle mismatch that the SAT mode can handle with HRR profiles alone. When we encountered track and report profiles whose aspect angles differed by more than 10°, we used a two-dimensional image associated with the track to generate an HRR profile at the same aspect angle as that of the report profile. To obtain SAR images while tracking, we considered a go-stop-go scenario in which the targets stopped long enough for the radar to obtain a SAR image. To obtain ISAR images while tracking, we assumed the availability of a road network to predict ISAR opportunities.

The classification and verification modes of the feature-aided tracker can potentially operate simultaneously to provide improved track continuity and target identification.

Acknowledgments

The authors would like to thank Gerald Benitz of the Sensor Exploitation group at Lincoln Laboratory for providing us with his expertise on superresolution techniques, as well as his HDVI and BHDI codes. We would also like to thank Keith Sisterson, also of the Sensor Exploitation group, for providing us with the UAV tracker. This work was sponsored by DARPA.

REFERENCES

1. M.I. Mirkin, B.E. Hodges, and J.C. Henry, "Moving Target Recognition," presentation for DARPA ATR Exposition, 17 Sept. 1998.
2. L.M. Novak, G.J. Owirka, W.S. Brower, and A.L. Weaver, "The Automatic Target-Recognition System in SAIP," Linc. Lab. J. 10 (2), 1997, pp. 187–202.
3. G.R. Benitz, "High-Definition Vector Imaging," Linc. Lab. J. 10 (2), 1997, pp. 147–170.
4. R.L. Levin, private conversation.
5. D.H. Nguyen, G.R. Benitz, J.H. Kay, and R.H. Whiting, "Super-Resolution HRR ATR Performance with HDVI," SPIE 4050, 2000, pp. 418–427.
6. D.H. Nguyen, G.R. Benitz, J.H. Kay, B.J. Orchard, and R.H. Whiting, "Super-Resolution High Range Resolution ATR with HDVI," IEEE Trans. Aerosp. Electron. Syst. 37 (4), 2001, pp. 1267–1286.
7. D.H. Nguyen, J.H. Kay, B.J. Orchard, and R.H. Whiting, "Improving HRR ATR Performance at Low SNR by Multi-Look Adaptive Weighting," SPIE 4379, 2001, pp. 216–228.
8. D.H. Nguyen, G.R. Benitz, J.H. Kay, B.J. Orchard, and R.H. Whiting, "HRR ATR Performance Enhancement at Low SNR Using Beamspace HDI," SPIE 4379, 2001, pp. 266–276.
9. G.J. Owirka, S.M. Verbout, and L.M. Novak, "Template-Based SAR ATR Performance Using Different Image Enhancement Techniques," SPIE 3721, 1999, pp. 302–319.
10. D.H. Johnson, "The Application of Spectral Estimation Methods to Bearing Estimation Problems," Proc. IEEE 70 (9), 1982, pp. 1018–1028.
11. H.C. Stankwitz, R.J. Dallaire, and J.R. Fienup, "Non-Linear Apodization for Sidelobe Control in SAR Imagery," IEEE Trans. Aerosp. Electron. Syst. 31 (1), 1995, pp. 267–279.
12. H.C. Stankwitz and M.R. Kosek, "Super-Resolution for SAR/ISAR RCS Measurement Using Spatially Variant Apodization (Super SVA)," Proc. AMTA Symp., Williamsburg, Va., 13–17 Nov. 1995, pp. 251–256.
13. J. Capon, "High-Resolution Frequency-Wavenumber Spectrum Analysis," Proc. IEEE 57 (8), 1969, pp. 1408–1418.
14. G.R. Benitz, "Beamspace High-Definition Imaging (B-HDI) SAR Image Enhancement," submitted to IEEE Trans. Image Process.
15. J.H. Kay and R. Levin, internal presentation.
16. L.K. Sisterson, private conversation.
17. D.H. Nguyen, J.H. Kay, B.J. Orchard, and R.H. Whiting, "Classification-Aided Tracking," Proc. 55th ATRWG Science and Technology Symp., Aerospace Corp., Chantilly, Va., 6–18 July 2001.
18. D.H. Nguyen, J.H. Kay, B.J. Orchard, and R.H. Whiting, "Feature-Aided Tracking of Ground Moving Vehicles," SPIE 18, 2002, pp. 234–245.


APPENDIX: BEAMSPACE HIGH-DEFINITION IMAGING SUPERRESOLUTION METHOD

This appendix describes the new superresolution method called beamspace high-definition imaging (BHDI), as applied to synthetic aperture radar (SAR) images and high range resolution (HRR) profiles. This method was originally developed by Gerald R. Benitz of Lincoln Laboratory for forming SAR images. Since BHDI is related to high-definition vector imaging (HDVI), we first discuss HDVI and how it is related to J. Capon's method, and then we show how HDVI is extended to BHDI.

Recently Benitz proposed a modification to HDVI that reduces the computational cost to one-sixth that of HDVI: where HDVI requires roughly 8000 operations per pixel, BHDI needs only 8000/6. BHDI provides speckle-noise suppression comparable to HDVI but better preserves target edges. In Figure 1 in the article we compare a BHDI-processed SAR image of a Scud TEL with an image formed with the conventional Taylor-weighted FFT (baseline FFT) method. The inherent ability of BHDI to control the main lobe and the sidelobes is clearly visible, as evidenced by the significant reduction in speckle noise in the BHDI image compared to the weighted-FFT image.

With this success with BHDI in forming SAR images, we adapted the method to the one-dimensional case of forming HRR profiles. The superresolution of HRR data can be viewed as a parameter-estimation problem in which we seek to estimate the reflectivity intensity at each pixel by selecting an appropriate set of weighting coefficients. The basic one-dimensional spectral estimation problem can be posed in the form of Capon's maximum-likelihood method (MLM), by which it is desired to estimate the power spectrum ρ(⋅) such that

ρ(x) = min_w E{ |[w, r]|^2 } = min_w [Rw, w],

subject to the constraint

[v, w] = 1,

where w ≡ w(x) is a weight vector, r ≡ r(x) is an HRR profile, v ≡ v(x) is an ideal point-scatterer response, and R ≡ R(x) is a covariance matrix at pixel location x. We use E{⋅} to denote the expectation, ||⋅|| to denote the Euclidean L2-norm, and [⋅,⋅] to denote the vector inner product. For an invertible covariance matrix R, the spectral estimate ρ(x) is given by the well-known result

ρ(x) = 1 / [R^(-1)v, v].

Since R is unknown, it must be estimated before Capon's technique can be applied. Several strategies exist for estimating a covariance matrix, including subband averaging, forward-backward subband averaging, and block-Toeplitz methods. In our work, we employed the forward-backward subband averaging technique, in which each HRR profile is sectioned into overlapping subsets whose correlations are subsequently averaged to obtain an estimated covariance matrix R̂. Let H = {r1, r2, ..., rM}, where ri ∈ R^N (i = 1, ..., M), denote a set whose elements represent N-sample HRR profiles. If q subbands from each profile are used to estimate the covariance matrix, then R̂ is given by

R̂ = (1/2)( R̃ + J R̃ J ),


where

R̃ = (1/(Mq)) Σ_{i=1}^{Mq} r_i r_i^H,

with the sum running over the Mq subband vectors r_i,

and J is the (N − q + 1) × (N − q + 1) exchange matrix, with ones on the antidiagonal and zeros elsewhere:

J = [ 0 0 ⋯ 0 1
      0 0 ⋯ 1 0
      ⋮         ⋮
      0 1 ⋯ 0 0
      1 0 ⋯ 0 0 ].
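The covariance estimate and the Capon spectral estimate above can be sketched in pure Python for the q = 2 case. This is an illustrative reduction, not the production algorithm: the function names are ours, the backward term follows the printed equation R̂ = ½(R̃ + J R̃ J), and the closed-form 2 × 2 inverse stands in for a general solver.

```python
def fb_covariance(profile, q):
    """Forward-backward subband-averaged covariance estimate from one
    complex HRR profile: average the outer products of overlapping
    length-q subbands (forward term R_tilde), then symmetrize with the
    exchange matrix J, i.e. R_hat = 0.5 * (R_tilde + J R_tilde J)."""
    n = len(profile)
    m = n - q + 1                         # number of overlapping subbands
    r = [[0j] * q for _ in range(q)]      # forward average R_tilde
    for s in range(m):
        sub = profile[s:s + q]
        for a in range(q):
            for b in range(q):
                r[a][b] += sub[a] * sub[b].conjugate() / m
    # J R J reverses both the row and the column order
    return [[0.5 * (r[a][b] + r[q - 1 - a][q - 1 - b]) for b in range(q)]
            for a in range(q)]

def capon_power_2x2(R, v):
    """Capon estimate rho = 1 / [R^{-1} v, v], using the closed-form
    2x2 inverse (a general implementation would factorize R)."""
    (a, b), (c, d) = R
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    acc = 0j
    for i in range(2):
        for j in range(2):
            acc += v[i].conjugate() * inv[i][j] * v[j]
    return 1.0 / acc.real

R_hat = fb_covariance([1 + 0j, 2 + 0j, 1 + 0j, 3 + 0j, 2 + 0j, 4 + 0j], 2)
rho = capon_power_2x2(R_hat, [1 + 0j, 1 + 0j])   # steering vector for DC
```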

When the estimated covariance matrix is singular, HDVI introduces a twofold modification to Capon's technique to accommodate a noninvertible R̂. The first modification, called the quadratic constraint, limits the norm of the weight vector from exceeding some threshold. The second modification, called the subspace constraint, restricts the weight vector to the subspace spanned by R̂.

The basic idea of HDVI is to solve the constrained optimization problem subject further to

||w − vproj||^2 ≤ β

and

[vproj, w] = ||vproj||^2,

where w ∈ span{R̂}. Here, vproj denotes the projec-

FIGURE A. Comparison of HRR profiles of a T72 tank formed by using the weighted FFT, HDVI, and BHDI algorithms. The graph at the bottom left shows an HRR profile at 0.5-m range resolution and 20-dB SNR, while the other three graphs show the same data at 1-m range resolution. The HDVI-processed profile at 1-m range resolution clearly exhibits scattering responses resembling those of the 0.5-m range-resolution profile processed with the weighted FFT. However, HDVI failed to preserve both of the scattering responses at the edges of the T72. BHDI, on the other hand, successfully preserved all the scattering responses seen in the HDVI-processed profile as well as both of the responses at the T72's edges.



tion of v onto the column space of R̂, and β is a positive real number used to limit how far w can deviate from vproj.

BHDI employs a different approach to estimate the radar cross section at each pixel of interest. The basic assumption in BHDI is that most of the energy contamination at the pixel of interest comes from the adjacent pixels. To estimate the radar cross section at each pixel of interest, we need only add and subtract the contributions from the adjacent pixels. Let rij denote the jth pixel of the ith profile ri, and define a vector zij as follows:

zij = [ rij , ri(j−1) + ri(j+1) ]^T.

The estimated covariance matrices of the real and imaginary components at the jth pixel of the ith profile hence depend only on the adjacent pixels (j − 1) and (j + 1):

R̂re,ij = re(zij) re(zij)^H,
R̂im,ij = im(zij) im(zij)^H    (2 × 2 matrices).

The estimated power spectrum ρ(j) at pixel j is thus given by the equality

ρ(j) = min_wj wj^H R̂j wj ,   where   R̂j = (1/Mj) Σ_{i=1}^{Mj} ( R̂re,ij + R̂im,ij ),

where

wj = [ 1 , ζ ]^T

with ζ ∈ R, for j = 1, ..., N, where N is the number of range pixels in each profile. We note that as long as M > 1, the estimated covariance matrix R̂j at pixel j will be nonsingular. As a result, the subspace constraint developed originally for HDVI is no longer required.
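A minimal sketch of the BHDI per-pixel estimate defined above (the function name and toy data are ours): for each profile we form z = [r_j, r_(j−1) + r_(j+1)], accumulate the averaged real- and imaginary-part covariances, and minimize w^T R̂_j w over w = [1, ζ]. Because the objective is quadratic in ζ, the minimizer has a closed form.

```python
def bhdi_pixel(profiles, j):
    """BHDI-style power estimate at range pixel j. For each complex
    profile, form z = [r_j, r_{j-1} + r_{j+1}], accumulate the averaged
    real-part and imaginary-part 2x2 covariances, and minimize
    w^T R w over w = [1, zeta]."""
    R = [[0.0, 0.0], [0.0, 0.0]]
    m = len(profiles)
    for r in profiles:
        z = [r[j], r[j - 1] + r[j + 1]]
        for part in (lambda c: c.real, lambda c: c.imag):
            p = [part(z[0]), part(z[1])]
            for a in range(2):
                for b in range(2):
                    R[a][b] += p[a] * p[b] / m
    # w^T R w = R00 + 2*zeta*R01 + zeta^2*R11, minimized at -R01/R11
    zeta = -R[0][1] / R[1][1] if R[1][1] > 0 else 0.0
    return R[0][0] + 2 * zeta * R[0][1] + zeta ** 2 * R[1][1]

power = bhdi_pixel([[1 + 0j, 3 + 0j, 1 + 0j],
                    [2 + 0j, 2 + 0j, 0 + 0j]], 1)
```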

We now discuss a sample result using BHDI. Figure A shows sample HRR profiles of a T72 tank at 20-dB SNR and 1-m and 0.5-m range resolutions. The baseline-FFT-processed profile at 1-m resolution (upper left in the figure) exhibits two broad scatterer responses, whereas the baseline HDVI-processed profile (upper right) reveals two additional scatterer responses. When we compare the 0.5-m range-resolution profile processed with the baseline FFT (lower left), we see that the additional scatterer responses shown in the baseline HDVI-processed profile also appear at the higher resolution. However, the baseline HDVI profile failed to detect the scatterer response that appears along the left edge of the 0.5-m resolution baseline FFT profile. The application of BHDI (lower right) successfully picked up the scatterer responses at both edges of the target. Since the baseline HDVI algorithm attempts to estimate the radar cross section at each pixel of interest by adding and subtracting contributions from every other pixel, the algorithm may treat noisy information along the edges as relevant information. BHDI, however, uses only the adjacent pixels and is thus less prone to contributions from noisy scatterer responses near the edges. As a result, BHDI tends to preserve the scatterer responses toward the edges of the target, while HDVI is inclined to null them out.


. received a B.S. degree from Purdue University in 1974 and a Ph.D. degree from Brandeis University in 1978, both in physics. He has worked at Lincoln Laboratory since 1978, where he is a senior staff member, currently working in the Signals and Systems group. His research interests include automatic target recognition as it applies to both moving and stationary targets.

. is a staff member in the Signals and Systems group. He received a B.S. degree from Loyola University at Chicago, an M.S. degree from the University of Illinois at Chicago, and a Ph.D. degree from Rensselaer Polytechnic Institute, all in mathematics. In addition, he has undergraduate and graduate degrees in chemistry and polymer science. Prior to joining Lincoln Laboratory, he worked at TASC in Reading, Massachusetts, and at the U.S. Naval Research Laboratory in Washington, D.C. He was also an adjunct assistant professor of mathematics and computer science at Merrimack College and Bentley College. He has been involved in research in a variety of fields, including solid state physics, underwater acoustics, parallel processing, signal processing, multisensor data fusion, and moving-target detection, tracking, classification, and imaging.

. was born in Saigon, Vietnam, in 1971. He completed his course work for the mathematics degree from the University of California, San Diego, in 1989, received a B.S. degree in mathematics and engineering from Harvey Mudd College, Claremont, California, in 1993, and M.S. and Ph.D. degrees in electrical engineering from the University of California, Los Angeles, in 1995 and 1998, respectively. From 1994 to 1997 he was a member of the technical staff in the control analysis group at the Jet Propulsion Laboratory. From 1997 to 1998 he worked as a systems engineer in the TRW Avionics Systems Division in San Diego, California. In 1999, Dr. Nguyen joined Lincoln Laboratory as a member of the technical staff in the Signals and Systems group. His current interests include feature-aided tracking, automated target recognition, superresolution, complex systems, and self-organizing control.

. received B.S., M.S., and Ph.D. degrees in electrical engineering at the University of Idaho, Stanford University, and the University of Idaho, respectively. His thesis work was on the study of optimization techniques for the control of sequential hydroelectric plants to maximize power output under environmental planning constraints. He joined Lincoln Laboratory in 1973, and he is currently group leader of the Signals and Systems group. His areas of research include radar technology, application of estimation and optimization theory, remote sensing and target recognition techniques, and image processing. He is a member of the IEEE.