
Exploring alternative wavelet base selection techniques with application to high resolution radar classification

Donald E. WaagenATR Design CenterRaytheon CompanyTucson, AZ, U.S.A.

[email protected]

Mary L. CassabaumATR Design CenterRaytheon CompanyTucson, AZ, U.S.A.

[email protected]

Clayton ScottElectrical and Computer Engineering

Rice UniversityHouston, TX, U.S.A.

[email protected]

Harry A. SchmittCognitive SystemsRaytheon CompanyTucson, AZ, U.S.A.

[email protected]


Abstract – Coifman, Wickerhauser, and Saito developed the wavelet 'best-basis' algorithm and the local discriminant bases (LDB) algorithm for signal and image characterization and classification. LDB was originally based on the differentiation of class-specific time-frequency energy distributions, using an L1 norm for multi-class feature weighting. Recent extensions of the approach estimate the Kullback-Leibler distance between empirically estimated class-conditional probability densities. This paper offers a complementary approach to wavelet base/feature selection via the Kolmogorov-Smirnov test, along with algorithmic extensions for wavelet feature selection in a multi-class environment. Alternative feature score normalizations are investigated. Additionally, this research develops a dynamic re-weighting scheme for feature selection. The goal of dynamic feature re-weighting is similar in spirit to 'boosting': it biases the base selection process toward features that offer more discrimination between currently 'costly' or 'tough' classes of interest. We investigate the efficacy of the algorithms in a multi-class discrimination setting using simulated high-resolution multi-polarimetric millimeter-wave real-beam radar signatures of ground vehicles.

Keywords: Automatic target recognition, wavelet packets, Kolmogorov-Smirnov test, feature selection, millimeter-wave radar.

1 Introduction

Let \(X \in \mathbb{R}^n\) be a random vector (signal or image) with an associated class label \(y \in \{1, \dots, m\}\). Given a set of training pairs \(\{(x_i, y_i)\}\) drawn from multiple target classes, the goal in classifier design is to determine a partitioning that correctly maps input vectors to their labels and minimizes the classification error in the operational environment. For signals and images that are localized in extent (i.e., non-periodic), decomposing the signal onto an orthonormal library of spatiotemporally localized time-frequency bases (for instance, wavelets) has demonstrable benefits [1]. Besides spanning the original space, wavelet orthonormal bases tend to realign and concentrate signal energy, thereby allowing signal characterization and representation in a reduced feature space [2]. A novel approach to orthonormal base selection in a classification setting was developed by Coifman and Saito [3][4], which they named 'local discriminant bases' (LDB). An overview of the original LDB technique and recent modifications is given in the next section. In this paper, we introduce an alternative fitness function (a standard statistical test) to measure the distributional differences between class-conditional probability densities, and the use of alternative norms in determining base (feature) efficacy in a multi-class setting. Additionally, a dynamic approach to re-weighting fitness scores, conditioned on the efficacy of previously selected bases, is developed and demonstrated on a multi-class radar discrimination problem.

2 Local Discriminant Bases

Given a pre-selected library of orthonormal bases (wavelet packets, local cosine, etc.), the original LDB [4] consists of the following steps:

1: Given a training set of m classes, expand the training signals of length n onto the library of redundant orthonormal bases \(\{w_{s,f,t}\}\) (indexed by scale s, frequency f, and time/location t). Compute the time-frequency energy map of class j, denoted \(\Gamma_j\) and specified by

\(\Gamma_j(s,f,t) = \sum_{i=1}^{N_j} \big(w_{s,f,t} \cdot x_i^{(j)}\big)^2 \Big/ \sum_{i=1}^{N_j} \big\|x_i^{(j)}\big\|^2\)  (1)

where \(N_j\) is the number of training samples of class j.

2: Compute an overall discriminant score for each subspace \(\Omega_{s,f}\) by summing a discriminant function over its coordinates:

\(\Delta(\Omega_{s,f}) = \sum_{t} D\big(\Gamma_1(s,f,t), \dots, \Gamma_m(s,f,t)\big)\)  (2)

with \(0 \le s \le S\), where S is the maximum scale (depth) of the library tree, and the composite score is a summation of the pairwise comparison scores,

\(D(\Gamma_1, \dots, \Gamma_m) = \sum_{j=1}^{m-1} \sum_{k=j+1}^{m} D(\Gamma_j, \Gamma_k)\)  (3)

D is a discriminant function specified on the energy maps. Typical definitions for D include relative entropy,

\(D(p, q) = \sum_{t} p_t \log\frac{p_t}{q_t},\)  (4)

or Fisher's measure of class separation. See [4] for details.


3: Determine the best basis (spanning the space) by comparing \(\Delta(\Omega_{s,f})\) with the summed scores of \(\Omega_{s,f}\)'s subspace tree descendants. Starting at the highest scale (tree leaves) and moving toward the lowest (tree root), select the subspace representatives (best basis) with the maximum overall score.

4: Order the basis functions of the selected best basis by their classification efficacy. Select the top-ranked bases as features to provide to a classifier.
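To make steps 1 and 2 concrete, here is a minimal sketch of the energy map of eq. (1) and the relative-entropy discriminant of eq. (4); the function names and the layout of the coefficient array are our own, and the wavelet packet expansion itself is assumed to have been computed elsewhere:

```python
import numpy as np

def energy_map(signals, packet_coeffs):
    # Eq. (1): class-j time-frequency energy map. `signals` is the
    # (N_j, n) array of training signals of class j; `packet_coeffs`
    # is the (N_j, K) array of their wavelet-packet coefficients.
    # Each coefficient's summed squared value is normalized by the
    # total energy of the class's training signals.
    total_energy = np.sum(signals ** 2)
    return np.sum(packet_coeffs ** 2, axis=0) / total_energy

def relative_entropy(p, q, eps=1e-12):
    # Eq. (4): relative entropy between two (nonnegative) energy maps;
    # eps guards against empty time-frequency cells.
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))
```

The composite multi-class score of eq. (3) is then the sum of `relative_entropy` over all class pairs.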

Unfortunately, the use of the 'energy map' in step 1 replaces the distributional characteristics of the training set with an estimate of the overall average power projected by the training data onto the bases. Energy-map comparisons between classes therefore reduce to comparisons of average power (a comparison between means). Higher moments are unavailable for exploitation, and the mean can be susceptible to training outliers. A preferred approach exploits the distributional characteristics of the individual sample projections, i.e., the coefficients \(w_{s,f,t} \cdot x_i\). Indeed, recent work by Saito [5] has abandoned the energy map and instead computes the relative efficacy of bases for classification via empirical average shifted histogram (ASH) [6] density estimates \(\hat p_j\) and the Kullback-Leibler [7] divergence measure:

\(D_{KL}(\hat p_j, \hat p_k) = \int \hat p_j(x) \log\frac{\hat p_j(x)}{\hat p_k(x)}\, dx\)  (5)
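As a rough stand-in for eq. (5), the class-conditional density of a single coefficient can be estimated with plain histograms (the cited work uses average shifted histograms [6]; the bin count and function name here are our choices):

```python
import numpy as np

def empirical_kl(a, b, bins=16, eps=1e-9):
    # Histogram estimate of the Kullback-Leibler divergence (eq. 5)
    # between the densities that generated samples a and b, using a
    # shared binning so the two estimates are comparable.
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```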

3 Kolmogorov-Smirnov Best Bases

This section introduces the algorithmic changes to LDB investigated in our research. First, we define an alternative discriminant function for measuring class-pairwise base classification efficacy. Second, we identify the alternative multi-class normalizations examined in our effort. Finally, we introduce a scheme for base (feature) selection that re-evaluates the efficacy of wavelet bases at each iteration of the selection process. A desirable discriminant function minimizes the assumptions required for its use while maximizing the power (efficacy) of the test. For instance, correct use of Fisher's discriminant function requires the assumption of unimodal class densities. The two-sample Kolmogorov-Smirnov test provides a distribution-free (model-free) approach to measuring the discrepancy between two samples.

3.1 Kolmogorov-Smirnov test statistic

The Kolmogorov-Smirnov test [8] is a nonparametric statistical test with a two-sample variant that quantifies whether two samples \(A = \{a_1, \dots, a_{N_A}\}\) and \(B = \{b_1, \dots, b_{N_B}\}\) were produced by the same underlying probability distribution. The empirical distribution function (EDF) of a sample, giving the proportion of sample points not exceeding x, is computed via

\(\hat F(x) = \frac{1}{N} \sum_{i=1}^{N} I(x_i \le x)\)  (6)

The Kolmogorov-Smirnov (or K-S) test statistic D is just the largest deviation between the two EDFs \(\hat F_A\) and \(\hat F_B\),

\(D = \sup_x \big|\hat F_A(x) - \hat F_B(x)\big|\)  (7)

This measure of separation is robust to outliers and makes no assumptions concerning the form of the underlying density function. Incorporating the K-S test as our discriminant function for estimation of pairwise class separation, we replace step one in the original LDB with the following:
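Equations (6) and (7) can be sketched in a few lines; in practice an off-the-shelf routine such as SciPy's two-sample K-S test could be substituted:

```python
import numpy as np

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov statistic (eq. 7): the largest
    # gap between the empirical distribution functions (eq. 6) of
    # samples a and b, evaluated over the pooled sample.
    grid = np.sort(np.concatenate([a, b]))
    # EDF value at x = proportion of sample points <= x
    F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(F_a - F_b)))
```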

1: Given a training set of m classes, project the training signals onto the library of redundant orthonormal bases:

\(c_i(s,f,t) = w_{s,f,t} \cdot x_i\)  (8)

1a: Compute the K-S test statistic \(D_{j,k}(s,f,t)\) for each pair of classes (j,k) across all base indexes (s,f,t). Form the \(m(m-1)/2\)-element vector of class-pair discriminant measures for each base:

\(\delta(s,f,t) = \big[D_{1,2}(s,f,t),\, D_{1,3}(s,f,t),\, \dots,\, D_{m-1,m}(s,f,t)\big]^T\)  (9)


This approach requires more memory than the original approach, as a wavelet packet decomposition is stored for each signal. The energy map approach of LDB allowed cumulative summations of samples to a single (mean) representation, hence requiring one wavelet packet decomposition per class. Given the array of class-pair discriminant scores, we need to provide an overall score for the wavelet base. This requirement is discussed in the next section.

3.2 Alternative normalizations of \(\delta\)

At this stage we have an array of m(m-1)/2 class-pair discriminant statistics, and we have some liberty in determining a mapping from the pairwise scores to an overall score. A natural general formulation is the p-norm, given by

\(\|\delta\|_p = \Big(\sum_k |\delta_k|^p\Big)^{1/p}\)  (10)

Popular values for p include 1, 2, and \(\infty\) (the sup norm), which is given as

\(\|\delta\|_\infty = \max_k |\delta_k|\)  (11)

This paper investigates the use of these p-norms and also a minimax (maxi-min) normalization given by \(\min_k \delta_k\). Although choosing the minimum might seem counterintuitive, it allows the user to select the bases with the 'best-worst-case' efficacy, and therefore merits investigation. In the p-norm context, the approach used by LDB (equation (3)) is equivalent to using an L1 (p=1) norm on \(\delta\).
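The normalizations above can be collected into a single helper; this is a sketch, with string names of our own choosing:

```python
import numpy as np

def overall_score(delta, norm="l1"):
    # Map the m(m-1)/2 pairwise discriminant scores of one base to a
    # single score: eqs. (10)-(11) plus the maxi-min variant.
    delta = np.abs(np.asarray(delta, dtype=float))
    if norm == "l1":
        return float(delta.sum())
    if norm == "l2":
        return float(np.sqrt(np.sum(delta ** 2)))
    if norm == "sup":          # p = infinity
        return float(delta.max())
    if norm == "min":          # 'best-worst-case' efficacy
        return float(delta.min())
    raise ValueError(f"unknown norm: {norm}")
```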

3.3 Dynamic re-weighting of feature utility

In a two-class setting, one would simply select the bases with maximum discriminatory capability. In a multi-class environment, the choice is not as straightforward. For instance, if a selected wavelet base perfectly separates two of the m classes, are additional features that separate the same two classes worth keeping? Should the selection instead be 'biased' toward the classes that have yet to be separated? To address this issue, we modify the feature selection process so that selections are biased toward features that discriminate between classes not yet well separated by the previous selections. Our approach defines an m(m-1)/2-element weight vector \(\lambda^{(K)}\), whose elements at each iteration K of the feature selection process represent a relative pairwise 'cost of misclassification' or 'current discrimination benefit' between the associated classes. The dynamically re-weighted discriminant vector is defined as the element-wise product

\(\tilde\delta(s,f,t) = \lambda^{(K)} \odot \delta(s,f,t)\)  (12)

A user-specified normalization, as discussed in the previous section, is applied to \(\tilde\delta\), thereby computing the overall utility of the respective base. After a base is selected, the weights are updated according to the following equation (inspired by the technique of boosting [9]):

\(\lambda_k^{(K+1)} = \lambda_k^{(K)} \exp\big(\bar\delta^* - \delta_k^*\big)\)  (13)

where \(\delta^*\) is the vector of unweighted discriminant scores associated with the currently selected wavelet base and \(\bar\delta^*\) is its average. Weights whose discriminant scores exceed the selected base's average are biased downward, while weights whose pairwise discrimination falls below the average are increased. The resulting new weights, or 'costs of misclassification', are then re-normalized to sum to one. These adjustments increase the likelihood that subsequently selected features will address class pairings that were not well separated by the previously selected bases. This dynamic re-weighting approach for wavelet feature selection replaces step four of the original LDB, and is summarized as:

4: Initialize \(\lambda_k^{(1)} = 1/M\) for \(k = 1, \dots, M\), where \(M = m(m-1)/2\).

For K = 1 to {number of desired features}

Selection process:
a) Compute \(\tilde\delta\) (eq. 12) for all bases
b) Given the specified norm, compute the overall utility of each base
c) Select the base with the largest utility

Update process:
d) Compute \(\lambda^{(K+1)}\) using eq. (13)
e) Normalize \(\lambda^{(K+1)}\) to sum to one

End for
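The selection/update loop above might be implemented as follows. The exponential form of the weight update is our reading of eq. (13) (pairs the chosen base already separates well are down-weighted for later picks), and all names are hypothetical:

```python
import numpy as np

def select_features(delta, n_select, norm=np.min):
    # delta: (n_bases, n_pairs) array of unweighted pairwise
    # discriminant scores (e.g. K-S statistics), one row per base.
    # norm: maps a weighted score vector to an overall utility.
    n_bases, n_pairs = delta.shape
    lam = np.ones(n_pairs) / n_pairs            # uniform initial costs
    chosen = []
    for _ in range(n_select):
        # a)-b) overall utility of every base under current weights
        utility = np.array([norm(lam * delta[b]) for b in range(n_bases)])
        for b in chosen:                        # never pick a base twice
            utility[b] = -np.inf
        best = int(np.argmax(utility))          # c) greedy selection
        chosen.append(best)
        d = delta[best]
        lam = lam * np.exp(d.mean() - d)        # d) eq. (13) update
        lam = lam / lam.sum()                   # e) renormalize
    return chosen
```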


The update formulation (13) is similar in spirit and functional form to the re-sampling rule in ‘boosting’, as it attempts to bias the base selection process to select features that offer more discrimination between currently ‘costly’ or ‘tough’ classes of interest. It is important to note that the concepts of alternative normalizations and/or dynamic re-weighting introduced in these sections are applicable to any multi-class base/feature ‘selection’ approach, including LDB. In the following sections, we will compare the original LDB with the K-S best basis. We also compare the original LDB with a ‘modified’ LDB, augmenting LDB with alternative p-norms and dynamic ‘cost’ re-weighting. These algorithms are evaluated using simulated data from a signal-processing domain of considerable interest to the authors, and the results are discussed in the following sections.

4 Measuring algorithmic efficacy

Given the options discussed in the previous section, a study was performed to quantify the value of the various approaches (e.g., original LDB vs. K-S selected bases, multi-class normalizations, cost-based re-weighting of bases). To compare the algorithms, a data environment of current interest (multi-polarimetric real-beam radar signatures of ground vehicles, discussed in the following section) was selected for algorithmic analysis. The criterion used for comparative analysis is the probability of correct classification of test data. This criterion requires a classifier to be trained on the wavelet bases/features selected by the algorithm under analysis. The classifier selected for our comparative analysis is a support vector machine (SVM), a general non-linear classifier that projects the features into an alternative space (via a kernel function) and finds the 'optimal' linear decision boundary in that space. See [10][11] for more details on support vector machines.

5 Application to Radar Classification

To evaluate the relative efficacy of the various strategies and criteria for wavelet base selection, a trade study was performed using simulated high-resolution radar (HRR) data. Publications on automatic target recognition algorithms in the HRR environment abound [12][13][14]. This section presents a brief description of the HRR problem domain and the specific data used for algorithm analysis.

5.1 Millimeter wave radar signatures

For this study, simulated fully polarimetric millimeter wave (MMW) inverse synthetic aperture radar (ISAR) images were generated for five classes of ground vehicles. The range resolution of this data was on the order of six inches. The images were converted from 2D images to 1D real-beam range profiles by means of frequency-domain processing. Range profiles are 1×n complex representations of the processed signal returns vs. sensor-to-target range, where n is the number of range bins processed. Example magnitudes of range profiles for two vehicles are shown in Figure 1 below. It is the differences in these profiles, at all sensor-target orientations, that we seek to characterize via wavelet representations and exploit for classification.

Figure 1. MMW range profile of two target classes.

The training data consist of 360 dual-polarimetric (left-circular and right-circular) range profiles (one signal per degree of pose) for each vehicle of interest. Test data were produced by inserting the ISAR 'chips' into complex SAR images and converting the composite images into real-beam range profile representations. An example of the training and corresponding test range profiles at the same pose (the relative orientation of sensor and target vehicle) is shown in Figure 2. Our test set consists of 715 images with targets placed at random locations and random pose (relative to the sensor); these images were converted to real-beam signatures. Detection of the target was presupposed, as this study was geared toward the relative analysis of algorithmic performance.

Figure 2. Corresponding training and test range profiles.

[Plot data removed: training and test signature panels (Figures 1-2), power in dB vs. range bin.]

In this research, the complex range profiles (both training and test) are converted to real-valued magnitude range profiles for wavelet processing and analysis. Wavelet analysis is performed in both polarizations, and the best features (as defined by the criterion under study) are selected as input to an SVM classifier for training and testing. Target pose information is given in neither the training nor the testing phase, as the extracted features are representative of differences across vehicle angular aspects. Five vehicles of similar size were chosen to test the algorithms in a realistic multi-class environment. Our experimental discrimination capabilities and classification results are discussed in the next section.

5.2 Algorithm Trade Study Space

The dimensionality of the potential trade space is quite high (i.e., wavelet library × discriminant function × p-norm × unweighted/dynamic re-weighting), and an exhaustive evaluation is beyond the scope of this paper. However, we summarize our results to date.

5.2.1 K-S Best Bases vs. LDB

The classification efficacy of the Kolmogorov-Smirnov test is compared to the energy map approach of LDB for two wavelet libraries (Daubechies-4 and Daubechies-8). The classifier results for K-S Best Bases and LDB are shown in Figures 3 and 4 for small numbers of features. The norms displayed for K-S Best Bases are the L1 and L2 norms, which show little or no difference between the two normalizations for these families. The results also indicate a difference between the LDB and K-S approaches when the number of features is less than 10. However, with a higher number of features, LDB-selected features outperform those selected via K-S Best Bases, as illustrated in Figure 5. Interestingly, all algorithmic variants for feature selection converge to identical classification results when a complete wavelet basis (spanning the original signal space) is supplied to the classifier. The complete-basis classification results (PCC = 0.95) are independent of the wavelet library selected (Daubechies, Coiflets, etc.), how the basis was selected (i.e., the discriminant function), the p-norm, and whether or not dynamic weights are applied. The complete-basis results were only a function of the SVM classifier hyperparameters; further discussion is beyond the scope of this paper.

Figure 3. Number of features vs. 5-class identification performance for 'best' 4 to 16 Daubechies (4) features.

Figure 4. Number of features vs. 5-class identification performance for 'best' 4 to 16 Daubechies (8) features.

Figure 5. Number of features vs. 5-class identification performance for 'best' 16 to 256 Daubechies (4) features.

5.2.2 Normalization of \(\delta\)

Several experiments were performed to determine the classification efficacy of the various multi-class vector normalizations (section 3.2) on the problem under study. Other measures (e.g., the number of matching features) could be applied to quantify feature selection overlap, but since our focus is on classification efficacy, overall classification performance remained our metric of choice. Figure 6 displays classification performance for the K-S discriminant function and the Daubechies (8) library using the L1, L2, sup, and min() functions applied to \(\delta\) for the best n features, with n varying from 4 to 256. Figure 7 displays the same information for the Daubechies (4) library.

[Plot data removed: four panels of percent correct classification (PCC) vs. number of bases — daub8 and daub4 libraries with curves LDB, K-S (L1), K-S (L2) (Figures 3-5), and daub8 with norms L1/L2/min/sup (Figure 6).]


Figure 6. Normalization effects on Daubechies (8) / K-S classification percentages for 4-256 bases.

Figure 7. Normalization effects on Daubechies (4) / K-S Classification percentages for 4-256 bases.

As illustrated by the figures above, no particular normalization approach was superior across the wavelet libraries or the number of bases selected. However, the choice of normalization approach did affect the performance of dynamic re-weighting of misclassification costs, as illustrated in the next section.

5.2.3 Uniform vs. Dynamic Re-weighting

Several effects became apparent while quantifying the applicability of dynamic re-weighting of pairwise class misclassification costs. Dynamic re-weighting of pairwise 'utility' produces negligible effects when an L1 norm is applied to \(\tilde\delta\), regardless of the discriminant function or wavelet family applied. This is illustrated for the Daubechies (4) library in Figure 8, given a K-S discriminant function and the L1 norm. Similar results were obtained when LDB was modified with dynamic re-weighting, with no significant differences detected across wavelet libraries. However, using the sup or min() normalization, differences in behavior were detected. Figure 9 illustrates a small but consistent improvement in classification efficacy via dynamic adjustment of \(\lambda\).

Figure 8. Dynamic re-weighting of pair-wise costs produces minimal results with L1 normalization.

Figure 9. Classification with/without dynamic re-weighting: K-S discriminant and min() norm.

5.2.4 Modifying LDB

As mentioned previously, the algorithmic modifications developed in this research are not specific to the K-S discriminant function, but are broadly applicable to any multi-class feature evaluation and selection technique. It was therefore of interest to examine the effects of augmenting LDB with alternative normalization and/or dynamic re-weighting. Adding dynamic re-weighting of pairwise costs to LDB (which implicitly uses an L1 norm) alone resulted in minimal changes. For small numbers of features, dynamic re-weighting combined with alternative norms in LDB provided improved results for some wavelet families, and insignificant or poorer performance for others. The improvement is illustrated in Figure 10, while Figure 11 illustrates less significant dynamic re-weighting results. Again, the classification differences between the various techniques become negligible as the number of features increases.

[Plot data removed: four panels of PCC vs. number of bases — daub4 with norms L1/L2/min/sup (Figure 7), daub8 with Reweighted K-S (min) vs. K-S (min) (Figure 9), daub4 with Reweighted K-S (L1) vs. K-S (L1) (Figure 8), and daub8 with Reweighted LDB (min) vs. LDB (L1) (Figure 10).]


Figure 10. Reweighted/min norm LDB vs. original LDB for Daubechies(8) library.

Figure 11. Reweighted/min norm LDB vs. original LDB for Daubechies(4) library.

6 Conclusions

This research has investigated three distinct modifications (discriminant function, multi-class discriminant normalization, and dynamic re-weighting of pairwise misclassification costs) to the original LDB library-base selection process, while attempting to quantify the classification efficacy of these approaches for the identification of vehicles from high-resolution radar signatures. The effects of our modifications become less significant as the number of features increases, with all techniques converging to identical classification results when a complete basis of wavelet features is provided to the support vector machine. Whether this is due to the robustness of the wavelet representations across families and techniques or to the capabilities of the support vector machine classifier is a topic for further investigation. Notably, similar results (convergence of classification efficacy as the number of wavelet bases increases) were reported by Saito et al. [5] when comparing LDB with the Kullback-Leibler based discriminant function (eq. 5). Although overall classification efficacy was not the focus of this research, the classification results show demonstrable promise for the classification of ground vehicles using wavelet libraries for feature extraction and characterization of high-resolution radar signatures.

7 References

[1] Ronald R. Coifman, M. V. Wickerhauser, Entropy-based algorithms for best basis selection, IEEE Trans. Info. Theory, Vol. 38, no. 2, pp. 713-718, 1992.

[2] Jonathan Buckheit, David Donoho, Improved Linear Discrimination Using Time-Frequency Dictionaries, Technical report, Department of Statistics, Stanford University, http://www-stat.stanford.edu/~donoho/Reports

[3] Naoki Saito, Ronald R. Coifman, Improved discriminant bases using empirical probability density estimation, 1996 Proc. Computing Section of Amer. Statist. Assoc., pp.312-321, 1997

[4] Naoki Saito, Ronald R. Coifman, Local discriminant bases, Mathematical Imaging: Wavelet Applications in Signal and Image Processing, A. F. Laine, M. A. Unser, Editors, Proc. SPIE, Vol. 2303, 1994.

[5] Naoki Saito, Ronald R. Coifman, Frank B. Geshwind, Fred Warner, Discriminant feature extraction using empirical probability density estimation and a local basis library, Pattern Recognition, Vol. 35, pp. 2841-2852, 2002.

[6] David W. Scott, Multivariate Density Estimation, John Wiley & Sons, New York, 1992.

[7] S. Kullback, R. A. Leibler, On Information and Sufficiency, Annals of Mathematical Statistics, Vol. 22, pp. 79-86, 1951.

[8] Jerold H. Zar, Biostatistical Analysis, Prentice Hall, New Jersey, 1984.

[9] Yoav Freund, Robert E. Schapire, A Short Introduction to Boosting, Journal of Japanese Society for Artificial Intelligence, Vol. 14, no.5, pp. 771-780, 1999.

[10] Vladimir N. Vapnik, An Overview of Statistical Learning Theory, IEEE Trans. On Neural Networks, Vol. 10, no. 5, pp. 988-999, 1999.

[11] Nello Cristianini, John Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, 2000.

[12] Dale E. Nelson, Janusz A. Starzyk, High Range Resolution Radar Signal Classification: A Partitioned Rough Set Approach, Proc. of the 33rd IEEE Southeastern Symposium on System Theory, pp. 21-24, 2001.



[13] Steven P. Jacobs, Joseph A. O’Sullivan, Automatic Target Recognition Using Sequences of High Resolution Range-Profiles, IEEE Trans. on Aerospace and Electronic Systems, Vol. 36, no. 2, pp. 364-381, 2000.

[14] Rob Williams, John Westerkamp, Dave Gross, Adrian Palomino, Automatic Target Recognition of Time Critical Moving Targets Using 1D High Range Resolution (HRR) Radar, IEEE Aerospace and Electronic Systems Magazine, pp. 37-43, April 2000.