Airborne application of information fusion algorithms to classification

P. Valin, É. Bossé, A. Jouan
DRDC Valcartier

Defence R&D Canada – Valcartier
Technical Report DRDC Valcartier TR 2004-282
May 2006



Author

Pierre Valin

Approved by

Éloi Bossé

Section Head, Decision Support Systems

Approved for release by

Gilles Bérubé

Chief Scientist

This report is the second in a series of 3 reports summarizing the results of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (Dr. Pierre Valin, Principal Investigator), and PWGSC Contract No. W2207-8-EC01 on Demonstrations of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance (Dr. Alexandre Jouan, Principal Investigator), under the Scientific Authority of Dr. Éloi Bossé. The other 2 reports are entitled Information Fusion Concepts for Airborne Maritime Surveillance and C2 Operations (DRDC Valcartier TM 2004-281) and Demonstration of Data/Information Fusion Concepts for Airborne Maritime Surveillance Operations (DRDC Valcartier TR 2004-283).

© Her Majesty the Queen as represented by the Minister of National Defence, 2006

© Sa majesté la Reine, représentée par le ministre de la Défense nationale, 2006


Abstract

The objective of the report is to survey the reasoning frameworks common in the artificial intelligence field for identity information fusion, and to select those that are appropriate to deal with dissimilar data coming from sensors involved in airborne data/information fusion. The Image Support Module (ISM) for the existing Forward-Looking Infra Red (FLIR) will make use of many of these reasoning frameworks in parallel, and actually fuse the results coming from these complementary classifiers. The ISM for the upcoming Spotlight Synthetic Aperture Radar (SSAR) will incorporate some of these reasoning methods in a hierarchical manner to provide multiple inputs to the Multi-Sensor Data Fusion (MSDF) module. The data used are a combination of simulated and real imagery for the SSAR and unclassified airborne data for the FLIR, obtained from the Naval Air Warfare Center at China Lake (USA) through the University of California at Irvine.

Résumé

L’objectif de ce rapport est de répertorier les schémas de raisonnement prévalant dans le domaine de l’intelligence artificielle pour la fusion de l’information sur l’identité, et de choisir les plus appropriés pour les données dissimilaires provenant des capteurs impliqués dans la fusion des données et de l’information sur une plate-forme aéroportée. Le module de support pour l’imagerie appropriée à l’actuel détecteur infrarouge à balayage frontal incorporera plusieurs de ces schémas de raisonnement fonctionnant en parallèle et fusionnera les sorties de ces classificateurs complémentaires. Le futur radar à synthèse d’ouverture aura son propre module de support pour l’imagerie distinct et intégrera quelques-uns de ces schémas de raisonnement de façon hiérarchique pour donner plusieurs résultats au module de fusion de données. Les données utilisées sont une combinaison de données réelles et simulées pour le radar à synthèse d’ouverture et des données déclassifiées pour le détecteur infrarouge à balayage frontal, obtenues du Naval Air Warfare Center de China Lake (USA) par l’Université de Californie à Irvine.


Executive summary

This report is meant to survey information fusion algorithms available for use in the airborne application of maritime surveillance by the CP-140 (Aurora). These algorithms span both the positional fusion and the identity information fusion components of the Multi-Sensor Data Fusion (MSDF) function.

The main objective of this report is to survey the reasoning frameworks common in the artificial intelligence field, and to select those which are appropriate to deal with dissimilar attribute data coming from the sensors involved in airborne data/information fusion. Among these reasoning frameworks, the most notable are fuzzy logic, neural networks, Bayesian reasoning, k-nearest neighbours and Dempster-Shafer evidential reasoning. Each of these will find an application in the design of Image Support Modules (ISM) for the existing Forward-Looking Infra Red (FLIR), and the upcoming Spotlight Synthetic Aperture Radar (SSAR).

Using these reasoning frameworks in a very selective and justified manner, the ISMs are designed according to the physical properties of the imagery they aim to classify. The performance of these ISMs will be demonstrated in a stand-alone fashion, leaving their integration into the full-fledged MSDF for the Aurora to a concluding technical report.

It will be shown that a hierarchical SSAR ISM is preferred, composed of a neural net for category definition and a Bayes length classifier for line combatant ships, with additional neural nets for subtype definition when the available imagery warrants it. The data used for training, validation and testing will be both simulated imagery from a DRDC Ottawa simulator and real imagery from a preliminary working version of the SSAR.

In the case of low contrast FLIR imagery, it will be shown that fusion of complementary FLIR classifiers can lead to excellent performance. This will be shown for two different fusers and four different classifiers. The data used will be unclassified airborne data for the FLIR, obtained from the Naval Air Warfare Center at China Lake (USA) through the University of California at Irvine.

Valin, P., Bossé, É., Jouan, A. (2006). Airborne application of information fusion algorithms to classification. Defence R&D Canada – Valcartier, TR 2004-282.


Sommaire

Ce rapport se veut une étude des algorithmes appropriés pour la fusion de l’information provenant de capteurs utilisés par l’aéronef de surveillance maritime CP-140 (Aurora). Ces algorithmes sont destinés à la fois au côté positionnel de la fusion multicapteur et à la détermination de l’identité de la cible.

L’objectif principal de ce rapport est l’étude des schémas de raisonnement fréquemment utilisés en intelligence artificielle et la sélection de ceux qui sont appropriés à la fusion des valeurs des différents attributs pouvant être mesurés par les capteurs impliqués dans la fusion aéroportée de données et d’information. Parmi ces schémas de raisonnement, mentionnons la logique floue, les réseaux de neurones, le raisonnement bayésien, la technique dite des k-plus-proches-voisins, et le raisonnement évidentiel de Dempster-Shafer. Chacun de ces schémas formera une application utilisée dans la conception d’un module de support pour l’imagerie pour l’actuel détecteur infrarouge à balayage frontal et le radar à synthèse d’ouverture en cours de développement.

Grâce à l’utilisation sélective et justifiée de ces schémas de raisonnement, la conception des modules de support pour l’imagerie reflète les propriétés physiques de l’imagerie qui doit être classifiée. Leur performance sera démontrée de manière indépendante, laissant à un dernier rapport l’intégration au système complet de fusion pour l’Aurora.

On démontrera qu’un classificateur hiérarchique est préférable dans le cas du radar à synthèse d’ouverture, lequel comprend un réseau de neurones pour la catégorie de vaisseaux, et d’un classificateur bayésien basé sur la longueur des bateaux combattants, avec possiblement un stage terminal d’une banque de réseaux de neurones pour une identification plus poussée, si la quantité d’images en permet l’entraînement. Les données utilisées pour l’entraînement, la validation et le test seront à la fois des données simulées à l’aide d’un simulateur de RDDC Ottawa et des données réelles provenant d’une version préliminaire du radar à synthèse d’ouverture qui est entrée en fonction.

Quant aux images à faible contraste du détecteur infrarouge à balayage frontal, on démontrera que la fusion des sorties de différents classifieurs amène à des résultats très performants. On le démontrera à l’aide de deux méthodes pour fusionner et quatre classificateurs. Les données utilisées sont des données aéroportées provenant du Naval Air Warfare Center de China Lake (USA) et obtenues par l’intermédiaire de l’Université de Californie à Irvine.

Valin, P., Bossé, É., Jouan, A. (2006). Airborne application of information fusion algorithms to classification. R&D pour la défense Canada – Valcartier, TR 2004-282.


Table of contents

Abstract ......................................................................... i

Résumé ........................................................................... i

Executive summary .............................................................. iii

Sommaire ........................................................................ iv

Table of contents ................................................................ v

List of figures ................................................................ vii

List of tables ................................................................ viii

1. Introduction ................................................................................................................... 1

2. Identity information fusion algorithms ........................................ 4
   2.1 Fuzzy logic ............................................................... 4
      2.1.1 Example of fuzzification rules ....................................... 5
      2.1.2 Example of combination process ....................................... 6
   2.2 K-nearest neighbours ...................................................... 7
   2.3 Neural networks (NN) ...................................................... 8
   2.4 Bayesian reasoning ........................................................ 9
   2.5 Dempster-Shafer evidential reasoning ..................................... 12
      2.5.1 Combination rules ................................................... 13
      2.5.2 Truncated Dempster-Shafer for real-time operation ................... 14

3. SSAR ISM design and stand-alone performance .................................. 16
   3.1. Implementation .......................................................... 17
      3.1.1. Step 1: Target segmentation ........................................ 17
      3.1.2. Step 2: Ship length estimate, length and eccentricity tests ........ 17
      3.1.3. Step 3: Ship category and ship type declarations ................... 19
         3.1.3.1. Ship category ................................................. 19
         3.1.3.2. Line ship type ................................................ 22
      3.1.4. Step 4: Ship class declaration ..................................... 26
   3.2. Tests ................................................................... 28


      3.2.1. Tests on simulated images .......................................... 28
      3.2.2. Test on XDM images ................................................. 32
   3.3. Neural net for determining category ..................................... 35
      3.3.1. Category NN training, validation and testing on profile vectors .... 35
      3.3.2. Validation of category NN on real imagery .......................... 37

4. FLIR ISM and its stand-alone performance ..................................... 38
   4.1. Feature selection ....................................................... 38
   4.2. Desired output classes .................................................. 39
   4.3. Fusion approach ......................................................... 40
   4.4. Frequency distribution .................................................. 41
   4.5. Classifiers ............................................................. 43
      4.5.1. DS classifiers ..................................................... 43
      4.5.2. Additive Bayes classifier .......................................... 44
      4.5.3. K-nearest neighbours classifier .................................... 45
      4.5.4. Neural net classifier .............................................. 46
   4.6. Fusers .................................................................. 46
      4.6.1. Results of the first fuser approach with a neural net .............. 46
      4.6.2. Results of the second fuser by a measure-based method .............. 47
      4.6.3. Comparing other approaches on the same real FLIR data .............. 48
      4.6.4. Conclusions on FLIR ISM classifiers and fusers ..................... 49

5. Conclusions .................................................................. 51

6. References ................................................................... 52

7. Acronyms ..................................................................... 55

8. Annexes ...................................................................... 59
   8.1. General data/information fusion sources ................................. 59
   8.2. Specific related data/information fusion sources ........................ 60

9. Distribution list ............................................................ 63


List of figures

Figure 1. MSDF system with the crucial component of identity information fusion ................. 3

Figure 2. Possible membership functions for the fuzzy variable speed...................................... 6

Figure 3. Example of the 3-nearest neighbour rule based on Euclidean distance ...................... 8

Figure 4. Neural Net for a FLIR classifier................................................................................. 9

Figure 5. Probability density distribution model for line ship length....................................... 10

Figure 6. Probability distributions of line combatants for different a priori probabilities........ 11

Figure 7. High-level description of the hierarchical ship classifier design .............................. 16

Figure 8. Geometry definition of ship heading and radar LOS ................................................ 18

Figure 9. Backpropagation Neural Net architecture used for ship category determination...... 22

Figure 10. Probability density distribution model for line ship length..................................... 23

Figure 11. Length distribution of line combatants versus merchant ships ............................... 24

Figure 12. Length distribution for each merchant type ............................................................ 25

Figure 13. Design of the Neural Net submodules (feature extraction and node connections) . 27

Figure 14. An example of a simulated SAR Image using the modified SARSIM2 simulator . 29

Figure 15. Training set and validation set curves for 100 or 5000 examples and 6 or 19 neurons .............................................................. 36

Figure 16. Generalization error on a fixed test sample vs. the number of hidden neurons ...... 37

Figure 17. Typical imagery for the eight classes. ..................................................................... 40

Figure 18. Frequency graph for attribute 1 binned in increments of 200. ................................ 42


List of tables

Table 1. Fuzzification rules for an ESM sensor ......................................... 5

Table 2. Inference rules combining the last CoPos with the input possibility InPos.................. 7

Table 3. A priori probabilities for four types of line combatants ............................................. 11

Table 4. Bayesian line type performance for equal a priori probabilities................................. 12

Table 5. Production rules used to train the ship category Neural Net ...................................... 20

Table 6. Confusion matrix for merchant ship type ................................................................... 25

Table 7. Ship database used for simulated imagery ................................................................. 28

Table 8. Simulation parameters used for the tests ................................................................... 29

Table 9. Confusion matrix for ship category declarations ............................. 30

Table 10. Confusion matrix for 1-proposition ship type declarations .................. 30

Table 11. Confusion matrix for 2-proposition ship type declarations .................. 31

Table 12. Detailed ship length estimation (metres) and confidence level declarations (%) for data subset H = +80°. Boldface indicates correct declarations. ....................................... 31

Table 13. Detailed ship length estimation (metres) and confidence level declarations (%) for the XDM images set. Boldface indicates correct declarations. ........................ 32

Table 14. Performance of the Knowledge-Based rules by category......................................... 37

Table 15. Performance of the NN by category .......................................... 37

Table 16. Confusion matrix for the DS classifier with AIR = 74.5%....................................... 43

Table 17. Confusion matrix for the modified Bayes classifier with AIR = 77.7% .................. 45

Table 18. Confusion matrix for the 3-NN classifier with AIR = 94.8%................................... 45

Table 19. Confusion matrix for the neural network classifier with AIR = 92.7%.................... 46

Table 20. Fusion results of classifiers with feed-forward neural networks .............................. 47

Table 21. Measure-based method confusion matrix................................................................. 48


1. Introduction

The first report of this series, entitled Information Fusion Concepts for Airborne Maritime Surveillance and C2 Operations, addressed the problem of Multi-Source Data Fusion on board the airborne maritime surveillance CP-140 (Aurora) aircraft. To that end, a survey of the concepts that are needed for data/information fusion was made, with the aim of improving Command and Control (C2) operations. All of the current and planned sensors were described and their suitability for fusion discussed. Relevant missions for the aircraft were listed and the focus was placed on a few important ones that make full use of the Aurora's sensor suite. For the identity information component of MSDF, a comprehensive set of a priori databases was constructed, containing all the information/knowledge about the platforms likely to be encountered in the missions. The most important of these was the Platform DataBase (PDB), which lists all the attributes that can be measured by the sensors (with accompanying numerical or fuzzy values); these can be of three types: kinematical, geometrical, or directly in terms of the identity of the target platform itself.

With this background information covered, it became clear that

1. the chosen scenarios contain slow-moving ships, so tracking algorithms can be simple Kalman filters;

2. in one scenario (Maritime Air Area Operations), data association should be simple, while in another (Direct Fleet Support), it could occasionally be challenging;

3. the aim of all missions is to identify ships, so;

4. the imaging sensors should be used as much as possible; and

5. classifiers should be designed to extract ship identity information from their imagery.

Therefore the objective of this second report is to survey the reasoning frameworks common in the artificial intelligence field for identity information fusion, and to select those that are appropriate to deal with dissimilar data coming from sensors involved in airborne data/information fusion. Since the identification of the ships in these scenarios is of prime importance, this report will also concentrate on how post-processing the imagery from the two imaging sensors on the Aurora can lead to inputs for identity information fusion. Finally, the last report of this series will demonstrate how this identity information deduced from imagery can be combined with other identity information, such as that provided by an ESM.

The Image Support Module (ISM) for the existing Forward-Looking Infra Red (FLIR) will make use of many of these reasoning frameworks in parallel, and actually fuse the results coming from these complementary classifiers. The upcoming Spotlight Synthetic Aperture Radar (SSAR) ISM will incorporate some of these reasoning methods in a hierarchical manner to provide multiple inputs to the Multi-Sensor Data Fusion (MSDF) module.


The data used are a combination of simulated and real imagery for the SSAR and unclassified airborne data for the FLIR, obtained from the Naval Air Warfare Center at China Lake (USA) through the University of California at Irvine.

According to the updated 1999 Joint Directors of Laboratories (JDL) classification of Data Fusion (DF) levels, one can expect to reason at correspondingly different levels (Steinberg, Bowman & White, 1999):

• Level 0: sub-object assessment should require only pre-processing, possibly selected from a priori knowledge of possible acquisition problems or difficult situations. Examples are image pre-processing to get the best resolution from raw data, or raw radar returns thresholded to give contact data.

• Level 1: single object refinement should involve evidential reasoning over single-object kinematics and attributes, towards the goal of obtaining the best platform ID or at least some level of the taxonomy tree. Examples will be given throughout this report. This level is often referred to as Multi-Sensor Data Fusion (MSDF).

• Level 2: situation refinement, a.k.a. Situation and Threat Assessment (STA), should involve reasoning over groups of objects and proceed by higher inference rules involving doctrinal and contextual information. Some concepts relevant to a priori information, which should be stored in databases, were explored in the first report. An upcoming report on higher-level fusion will expand on this topic.

• Level 3: implication refinement should involve reasoning over plan alternatives to suggest plan decisions. Since the plan alternatives have to take into account the plans that the red force may envisage, given its understanding of the present and foreseen situation, the same concepts documented for Level 2 are needed here.

• Level 4: process refinement should involve reasoning over own-ship and environmental conditions in order to perform better sensor management and thus close the Observe, Orient, Decide, Act (OODA) loop. It should also refine the data fusion process itself, taking into account the best algorithms, given contextual information such as target density, clutter, expected target manoeuvres, etc.

This report concentrates mostly on Level 1 single object refinement (mostly algorithmic) in a multi-sensor, multi-target environment, for identity information fusion, which is checkmarked in Figure 1 below. Some pre-processing of the imagery can however be considered as Level 0, such as thresholding ship imagery to determine the outline of ships in FLIR imagery, or denoising SAR imagery. This is referred to as Input Data Preparation (also checkmarked in Figure 1). Since imaging sensors are cued to interesting tracks in the Internal System Track Data Store (ISTDS), the data association step just verifies that the target has been properly imaged. Both of these items are also checkmarked in Figure 1.


Figure 1. MSDF system with the crucial component of identity information fusion

This report is therefore organized as follows:

• Section 2 presents five reasoning frameworks for the identity information fusion shown in Figure 1 above. These reasoning methods can treat uncertain and incomplete information, and will be used in the ISMs discussed in the next sections.

• Section 3 will discuss a hierarchical SSAR ISM and demonstrate its stand-alone performance on simulated and real imagery. Each step of the classifier will make use of methods discussed in Section 2.

• Section 4 will discuss several FLIR classifiers and demonstrate their stand-alone performance. In a further refinement, the outputs of combinations of these complementary classifiers will be fused by two methods discussed in Section 2.

• Section 5 presents some conclusions.

The ultimate goal is of course to provide the commander with the desired situation awareness, particularly the ID of target enemy ships, early and unambiguously enough for evasive manoeuvres to be performed by the CP-140 aircraft.


2. Identity information fusion algorithms

The following five reasoning frameworks are introduced as they pertain to MSDF Level 1 fusion, and examples are given for each:

a) fuzzy logic,

b) k-nearest neighbour,

c) neural networks (NNs),

d) Bayesian approach (with a priori information), and

e) Dempster-Shafer (DS) approach with several variants.

The design of ISMs will make use of the latter four, while fuzzy logic will be used to prepare the input physical data for proper construction of propositions for the MSDF module.

2.1 Fuzzy logic

Fuzzy logic deals with approximate modes of reasoning. In standard logic, a proposition is either true or false. In fuzzy logic, a proposition has a parameter value, called a membership value, ranging from 0 (completely false) to 1 (completely true). Zadeh's fuzzy logic (Zadeh, 1965 and 1968) is a well-defined formalism that describes fuzzy propositions and the combination rules used to create syllogisms and inferencing with fuzzy probability. The application of fuzzy logic to the ID estimation problem is not as well documented in the literature as the Bayesian or DS approaches.

Fuzzy logic will be used extensively in data/information fusion to transform physical measurements, which are either

• only approximately measured due to inherent noise (Gaussian or not), or

• for which there can be an incompletely known bias that must be corrected for, as in speed relative-to-ground vs. speed relative-to-air,

into acceptable fuzzy declarations about the physical parameter measured (in this case speed relative to the medium). These fuzzy declarations will then be mapped into a list of possible platform IDs with their associated likelihoods (Bayesian probabilities or basic probability assignments in the Dempster-Shafer sense).

In this section, despite the absence of literature about attribute combination for ID estimation, a simple fuzzy logic algorithm that combines the ESM sensor evidence is developed as a pedagogical example. The algorithm below is an ad hoc algorithm that imitates the phenomenology coming out of a fuzzy logic estimator. The example does not cover all particularities of fuzzy logic rules, since the information comes from a single ESM sensor and is always of the same form. Nevertheless, the example permits an appreciation of the fuzzification of the input and the defuzzification of the output processes, with their interpretation.

A typical fuzzy logic algorithm (used here as an estimator) is composed of three fundamental sequential processes:


1. the input fuzzification process, which combines the fuzzy variables,

2. the logic combination rule,

3. the output defuzzification process,

together with the knowledge base source that contains the inference rules.

The input data from the sensor are first interpreted and put in a fuzzy variable form in order to perform normalization and interpretation of the input data. When the data are all numerical, all of the same type and from a single source, the interpretation task is relatively easy. The combination process performs the arithmetic or the algebra that is necessary to combine the input fuzzy variables, under the recommendation of the inference rules or logic combination rules. The defuzzification process performs the opposite task of the first process: it transforms the fuzzy statements into the appropriate information type.

2.1.1 Example of fuzzification rules

To be able to apply this method, the ESM sensor is required to output a list of candidate emitter identities with a Confidence Level (CL) associated with each emitter. This CL should be a numerical representation of the quality of the fit between the emitter characteristics and the measured parameters. For each pre-determined CL range, a fuzzy statement is associated as shown in Table 1. As the ranges descend, the fuzzy statements are less and less categorical about the possibility of the ESM declaration. The third column of Table 1 shows the numerical weight, which is given to the fuzzy statement and used by the combination process.

Table 1. Fuzzification rules for an ESM sensor

ESM Computed CL        Associated Fuzzy Statement   Numerical Weight
1 > Conf > 0.99        Extremely Possible                    3
0.99 > Conf > 0.95     Highly Possible                       2
0.95 > Conf > 0.70     Very Possible                         1
0.70 > Conf > 0.50     Possible                              0
0.50 > Conf > 0.25     Moderately Possible                  -1
0.25 > Conf > 0.02     Slightly Possible                    -2
0.02 > Conf > 0.005    Almost Impossible                    -3
0.005 > Conf > 0       Impossible                           -4
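The fuzzification of Table 1 amounts to a lookup from a CL bin to a (fuzzy statement, numerical weight) pair. A minimal sketch in Python, using the bin edges of Table 1 (the function name and data structure are illustrative, not from the report):

```python
# Fuzzification rules of Table 1: each entry is (lower CL bound,
# fuzzy statement, numerical weight), scanned from the top bin down.
FUZZIFICATION_RULES = [
    (0.99, "Extremely Possible", 3),
    (0.95, "Highly Possible", 2),
    (0.70, "Very Possible", 1),
    (0.50, "Possible", 0),
    (0.25, "Moderately Possible", -1),
    (0.02, "Slightly Possible", -2),
    (0.005, "Almost Impossible", -3),
    (0.0, "Impossible", -4),
]

def fuzzify(cl):
    """Return (fuzzy statement, numerical weight) for a CL in [0, 1]."""
    if not 0.0 <= cl <= 1.0:
        raise ValueError("confidence level must lie in [0, 1]")
    for lower, statement, weight in FUZZIFICATION_RULES:
        if cl > lower:
            return statement, weight
    # CL of exactly 0 falls through to the bottom bin.
    return FUZZIFICATION_RULES[-1][1], FUZZIFICATION_RULES[-1][2]

print(fuzzify(0.97))  # ('Highly Possible', 2)
print(fuzzify(0.10))  # ('Slightly Possible', -2)
```

The numerical weight, not the statement, is what the combination process of Section 2.1.2 operates on.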

As already mentioned, the fuzzification process can be generalized in terms of membershipfunctions, where the crude binning of the example above is replaced by overlapping functions

Page 18: Airborne application of information fusion algorithms to

6 DRDC Valcartier TR 2004-282

which sum to one, usually of triangular or trapezoidal shape. This is illustrated in Figure 2 below for speed declarations (Valin, 2000 and 2001) ranging from Very Slow (VS) through Medium (M) to Very Fast (VF).

Figure 2. Possible membership functions for the fuzzy variable speed
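A minimal sketch of such membership functions, assuming five triangular memberships on a normalized speed axis (the centres and width below are illustrative assumptions, since Figure 2 gives no numerical scale); at any speed inside the axis range the memberships sum to one:

```python
# Illustrative triangular membership functions for a fuzzy "speed"
# variable, as in Figure 2.  Centres are evenly spaced and the half-base
# equals the spacing, so adjacent categories overlap and sum to one.
CENTRES = {"VS": 0.0, "S": 0.25, "M": 0.5, "F": 0.75, "VF": 1.0}
WIDTH = 0.25  # half-base of each triangle (assumed)

def memberships(speed: float) -> dict:
    """Return the degree of membership of `speed` in each fuzzy category."""
    return {label: max(0.0, 1.0 - abs(speed - c) / WIDTH)
            for label, c in CENTRES.items()}
```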

2.1.2 Example of combination process

The combination process is performed in two steps. Firstly, the input possibility (InPos) is combined with the previous combined possibility (CoPos) and secondly, the last five combined possibilities are averaged to get the track-level output possibility (Pos). For each emitter Ei, the track-level possibility at any time tn is computed by

Pos(Ei, tn) = (1/h) Σ_{l=n−h+1}^{n} CoPos(Ei, tl),   h = Smaller(5, n)

CoPos(Ei, tl) = CoPos(Ei, tl−1) ⊗ InPos(Ei, tl)

Pos(Ei, t1) = CoPos(Ei, t1) = InPos(Ei, t1)

where the last equation gives the initialization condition. Pos, CoPos and InPos are the output track-level possibility, the combined possibility and the input possibility (contact level), respectively. The index h, chosen here as an example to be the lesser of n and 5, restricts the average to the last five values. The symbol ⊗ denotes the application of the combination rule, which is represented by the following inference matrix. The matrix in Table 2 provides the result (using the numerical weight) of the combination of the new input information with the previous one.



The rationale concerning the choice of the elements is essentially system-performance dependent and based on the experience of the designers with the problem of ID estimation. As the lack of symmetry among the numbers in Table 2 shows, the combination rules favour the new input data over the previously combined data. This, however, is counterbalanced by the averaging process over the last five samples.

Table 2. Inference rules combining the last CoPos with the input possibility InPos

InPos \ Last CoPos    3    2    1    0   -1   -2   -3   -4

3 3 3 3 3 2 1 0 -1

2 3 3 3 2 1 0 -1 -2

1 3 3 2 1 0 -1 -2 -3

0 2 2 1 0 -1 -2 -3 -4

-1 1 1 0 -1 -2 -3 -3 -4

-2 0 0 -1 -2 -3 -3 -3 -4

-3 -1 -1 -2 -3 -3 -3 -3 -4

-4 -2 -3 -3 -4 -4 -4 -4 -4

Finally, defuzzification can proceed by a table similar to Table 1 (after rounding Pos to the nearest integer, and reading right to left), or by slight variations of it, giving the final output CL of Ei.
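The two-step combination above, Table 2 plus the five-sample average, can be sketched as follows (a minimal Python sketch; the history-list representation is an illustrative choice):

```python
# Rows are indexed by InPos (3 .. -4) and columns by the last CoPos
# (3 .. -4), exactly as in Table 2.
WEIGHTS = [3, 2, 1, 0, -1, -2, -3, -4]
INFERENCE = {
    3:  [3, 3, 3, 3, 2, 1, 0, -1],
    2:  [3, 3, 3, 2, 1, 0, -1, -2],
    1:  [3, 3, 2, 1, 0, -1, -2, -3],
    0:  [2, 2, 1, 0, -1, -2, -3, -4],
    -1: [1, 1, 0, -1, -2, -3, -3, -4],
    -2: [0, 0, -1, -2, -3, -3, -3, -4],
    -3: [-1, -1, -2, -3, -3, -3, -3, -4],
    -4: [-2, -3, -3, -4, -4, -4, -4, -4],
}

def combine(copos_history: list, inpos: int) -> float:
    """Append the new combined possibility CoPos to the history and
    return the track-level Pos, averaged over the last h = min(5, n)."""
    if not copos_history:            # initialization: CoPos = InPos at t1
        copos_history.append(inpos)
    else:
        last = copos_history[-1]
        copos_history.append(INFERENCE[inpos][WEIGHTS.index(last)])
    h = min(5, len(copos_history))
    return sum(copos_history[-h:]) / h
```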

2.2 K-nearest neighbours

When trying to identify a platform type or class via a set of N measured attributes, as is done in imagery classifiers, it is often convenient to view how the different classes group in this N-dimensional space. If these classes group together rather tightly in this N-dimensional space, with little overlap between classes, then one can identify the class of a new object by measuring the proximity of its measured attributes to those of the different classes.

Thus a k-nearest neighbour classifier finds the k nearest neighbours based on a metric distance and returns the class with the greatest frequency. The k-nearest neighbour rule is attractive because no prior knowledge of the distributions is required. This simple rule has given good results in the classification domain, even for the simple choice of k = 3, used here and illustrated in Figure 3 below (Tremblay & Valin, 2003).


Figure 3. Example of the 3-nearest neighbour rule based on Euclidean distance

The traditional criticism of the nearest neighbour rule points to the large storage space required for the entire training set and the seeming necessity to search the entire training set for the k nearest neighbours in order to make a single object classification. However, it tends to outperform most other classification schemes.
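A minimal sketch of the k = 3 rule with a Euclidean metric (function and variable names are illustrative):

```python
# 3-nearest-neighbour classification over N-dimensional attribute
# vectors: find the k closest training samples and take a majority vote.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (attribute_vector, class_label) pairs.
    Returns the majority class among the k nearest training points."""
    nearest = sorted(train, key=lambda tc: math.dist(tc[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```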

2.3 Neural networks (NN)

NNs are particularly useful when one has a large volume of data to reason over, such as a large collection of image features to be used for classification. This is the case for FLIR and SAR imagery. A typical NN application starts by extracting features from typical imagery, e.g., invariant moments and auto-regressive model parameters for FLIR ship images. Depending on the size of the training set and the independence of the input attributes, a number of hidden layers containing a selected number of neurons with intricate interconnections is chosen, and the NN is trained on a fraction of the available imagery. The remaining imagery is kept for the validation and test sets.

A practical example of such a FLIR classifier is shown in Figure 4 (Tremblay & Valin, 2002; Valin, 2002), where 11 inputs are used to identify eight types of ships obeying a taxonomy tree similar to the one discussed previously, namely: destroyer, frigate, cruiser, destroyer with guided missiles, landing assault tanker, auxiliary oil replenishment, civilian freighter, and cargo/container. The taxonomy tree regroups the first four as line ships and the last two as non-naval, while the line ships and the landing assault tanker form a combatant group (borrowing from STANAG 4420 terminology). The auxiliary oil replenishment ship is the lone example of the non-combatant type. There were 2545 images available (close to the rule of thumb 30×11×8).

[Figure content: inputs are moment invariants and AR model parameters; hidden layers of 50 and 20 neurons; outputs are the eight ship types (destroyer, frigate, cruiser, GM destroyer, assault tanker, container/cargo, civil freighter, aux oil replenishment), grouped into LINE, COMBATANT, NC and NM.]

Figure 4. Neural Net for a FLIR classifier

The main disadvantage of using any NN is that one has to view it as a "black box", where the information contained within the NN encodes the knowledge gained through the training stage, without providing any explanation as to how it actually does so.

More details about the use of NNs in ISMs will be provided in the next two sections, where the SSAR and FLIR ISMs are treated separately. In Figure 4, AR refers to Auto-Regressive model parameters, NC to Non-Combatant and NM to Non-Military, as will be discussed later in Section 4.

2.4 Bayesian reasoning

Bayesian reasoning is at the foundation of many tracking and evidential reasoning approaches. One concentrates here on its application as an alternative classifier to the NN approach outlined in the previous section, for ship type (rather than category) extracted from SAR imagery. If the confidence level on a line declaration from the NN is high enough (let's say > 50%), then an estimate of the line ship type should be initiated. This is performed using a Bayes classifier based on the length distributions of frigates, destroyers, cruisers, battleships and aircraft carriers (again borrowing from STANAG 4420 terminology). Ship lengths for this statistical analysis have been obtained by browsing Jane's Fighting Ships, and their probability density distributions are approximated by the curves shown in Figure 5 (Valin, Tessier & Jouan, 1999).

Figure 5. Probability density distribution model for line ship length

It should be noted that the aircraft carrier distribution is the only one that is not nearly Gaussian, and it is also the one with the smallest number of representative ships from Jane's, even after merging with the battleship class. For this reason, these curves are often approximated in practice by Gaussians, as will be shown in Section 3.1.3.2.

Given that a ship length range has been evaluated from the ship end-points in the imagery, one calculates the mean a posteriori type probability Pavg(t|s) that a ship of length s belongs to type t by averaging the standard Bayes rule over the entire ship length range,

Pavg(t|s) = Avg_{length range} [ p(s|t) P(t) / p(s) ]

p(s) = Σi p(s|ti) P(ti)

where p(s|t) and P(t) are the probability density distribution and the a priori probability of type t, respectively. Mean a posteriori type probabilities are re-normalized so that their sum is unity. Naturally, the a priori probability P(t) depends on the context and could be set by the radar operator prior to the mission.
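A minimal sketch of this averaged Bayes rule, assuming Gaussian length densities p(s|t) (the means, standard deviations and priors used for testing are illustrative placeholders, not the Jane's-derived values of Figure 5):

```python
# Mean a posteriori type probability: average the Bayes posterior
# p(s|t) P(t) / p(s) over sample lengths spanning the measured ship
# length interval, then re-normalize so the posteriors sum to one.
import math

def gauss(s, mu, sigma):
    """Gaussian probability density at s."""
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mean_posteriors(length_range, types, n_samples=50):
    """types: {name: (mu, sigma, prior)}.  Returns {name: Pavg(t|s)}."""
    lo, hi = length_range
    samples = [lo + (hi - lo) * i / (n_samples - 1) for i in range(n_samples)]
    post = dict.fromkeys(types, 0.0)
    for s in samples:
        p_s = sum(gauss(s, mu, sg) * pr for mu, sg, pr in types.values())
        for name, (mu, sg, pr) in types.items():
            post[name] += gauss(s, mu, sg) * pr / p_s
    total = sum(post.values())      # re-normalize to unit sum
    return {name: v / total for name, v in post.items()}
```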


Given the following four a priori distributions of Table 3

Table 3. A priori probabilities for four types of line combatants

Figure Frigate Destroyer Cruiser Aircraft Carrier

A 0.25 0.25 0.25 0.25

B 0.10 0.10 0.70 0.10

C 0.10 0.70 0.10 0.10

D 0.15 0.70 0.15 0.0

one obtains the probabilities shown in Figure 6 below, with the notation P(C|x) = Pavg(t|s):

Figure 6. Probability distributions of line combatants for different a priori probabilities

The performance of this Bayesian classifier on the actual database of 145 ships used, given an equal 0.25 a priori probability of occurrence of any particular type, is shown in the confusion matrix of Table 4.


Table 4. Bayesian line type performance for equal a priori probabilities

Frigate Destroyer Cruiser Aircraft Carrier

Frigate 37 13 - -

Destroyer 9 34 4 -

Cruiser - 3 13 2

Aircraft Carrier - - 10 10

2.5 Dempster-Shafer evidential reasoning

Dempster-Shafer (DS) evidential reasoning (Dempster, 1967; Shafer, 1976) provides an alternative to Bayesian reasoning in the presence of possibly strongly conflicting, uncertain and incomplete information. Let's recapitulate here the salient features of DS theory, since DS reasoning will be used extensively in the FLIR ISM to be discussed later, and also in the identity information component of the MSDF systems shown previously in Figure 1. The demonstration of DS for identity information fusion will be a subject of the third report of this series.

Using the language of set theory, a proposition is a set where each irreducible element (elementary proposition, or singleton) is an element of a complete set named the frame of discernment, usually noted Θ. If N is the cardinality of Θ (its number of elements), there are 2^N possible propositions that can be raised from Θ. The set of all possible propositions is usually noted as the power set 2^Θ.

The Basic Probability Assignment (BPA), also known as mass, is a value associated by a sensor with a proposition, indicating the level of confidence or certainty given by the sensor. Since sensors rarely provide an estimate of their own working state, the value of this mass has to follow some heuristics.

The mass is thus a quantity output by a sensor which carries information only about the proposition A it has deduced, and nothing at all about the propositions which are subsets of A. A mass close to one is a strong indication that this proposition is probably true. However, a mass close to zero does not necessarily mean the opposite, since this depends on the mass given to other alternative propositions, including the ignorance. As an aside, the total mass assigned to all IDs proposed by the sensor should never be less than 0.5, because such a declaration would give the ignorance more than 0.5, defeating the purpose of including the sensor report for a better ID through identity information fusion!

In DS, all the information about a proposition is obtained from a pair of quantities, which are computed from the masses of various related propositions. The DS approach defines the concept of an evidential interval, denoted [Bel, Pls], where the lower bound, the belief Bel, and the upper bound, the plausibility Pls, are obtained from

Bel(A) = Σi m(Ai),   where the sum runs over all Ai ⊂ A (up to 2^|A| subsets)

Pls(A) = 1 − Bel(¬A)


The belief in a proposition A represents the minimal commitment that can be extracted from the masses of various sensor-declared propositions Bj. All Bj that are subsets of A contribute to that minimal commitment. The plausibility of a proposition represents the maximal commitment from the sensor declarations. All Bj that have at least an element in common with A contribute to the plausibility. Consequently, the Bayesian probability Prob(A) satisfies Bel(A) ≤ Prob(A) ≤ Pls(A).

When the question arises as to which reasoning framework to use for compounding, or fusing, successive sensor declarations about a given attribute, the best choice seems to be the DS formalism because it can (Valin & Bossé, 2003):

• process incomplete information, implying that ignorance should be a concept defined mathematically rigorously (ignorance corresponds to the complete PDB in DS reasoning);

• not require a priori information, sometimes impossible to gather for a given mission, which rules out Bayesian reasoning (Bayes would have to split the ignorance equally across all other platforms, which can lead to large computational loads);

• handle conflicts between contact and track, implying that conflict should be a mathematically defined concept (conflict is calculated as the sum of BPAs with null set intersection in DS);

• have a real-time method, which means that DS truncation is essential (rules to that effect have been empirically determined and benchmarked);

• present the operator with the best ID, i.e., give preference to singletons, then the next best thing, i.e., doublets, then triplets, etc.;

• have the possibility of computing beliefs in higher nodes of the tree, as sums of BPAs of the underlying branched structure; and

• have the possibility of providing several different functions as a decision aid, which can be, in the DS scheme, the plausibility or the expected utility interval, for example.

2.5.1 Combination rules

Conventional literature allows for the traditional combination rules, where conflict can exist between the latest sensor declaration about a target and the existing cumulative knowledge about that platform in the track database. Some more recent literature allows for not declaring a conflict at a finer level of the taxonomy tree, but rather for throwing back the conflict to the first coarser level of the taxonomy where conflict does not exist. These combination rules are referred to as the orthogonal sum and the hierarchical orthogonal sum, respectively, and are discussed below.



Suppose Ai and Bj are members of two lists of sensor propositions which are statistically independent, i.e., their mass values are not correlated or computed from a priori knowledge of each other. The numbers of members in each list are I and J, respectively. As with Bayes' formula, which combines independent probability measurements, DS evidential theory has a way to combine independent mass measurements to get new mass data on the output propositions, including some new propositions obtained from combinations. The frame of discernment Θ should be the same for both lists of declarations. If the masses associated with propositions Ai and Bj are independent, the new mass of the intersection of both propositions is obtained from DS's rule of combination, a.k.a. the orthogonal sum:

m(Ck) = [ Σ_{i,j : Ai ∩ Bj = Ck} m(Ai) m(Bj) ] / (1 − K)

K = Σ_{i,j : Ai ∩ Bj = ∅} m(Ai) m(Bj)

where K is called the conflict. The application of these two formulas is not straightforward when the numbers I and/or J are large. When the number of reported propositions is high, the combination rule has a tendency to increase the number of propositions by creating new ones. The problem is an NP-hard one, and a truncation scheme has to be devised. In other words, despite the fact that the orthogonal sum takes conflict into account in a mathematically correct way, this may not be sufficient for the stable operation of DF in a decision aid role.
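A minimal sketch of the orthogonal sum with the conflict K (the frozenset encoding of propositions is an illustrative choice):

```python
# Dempster's orthogonal sum: combine two mass functions pairwise; mass
# falling on the empty intersection is the conflict K, and the surviving
# masses are renormalized by 1 - K.

def orthogonal_sum(m1, m2):
    """m1, m2: {frozenset proposition: mass}.  Returns (fused masses, K)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: orthogonal sum undefined")
    return {c: m / (1.0 - conflict) for c, m in combined.items()}, conflict
```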

2.5.2 Truncated Dempster-Shafer for real-time operation

The DS theory of evidence, because it assigns BPAs to subsets of a PDB and not to its individual elements, can generate an exponential number of propositions, which is not very useful in real-time applications. This is where an approximation scheme has to be implemented via some truncation algorithm. The Truncated DS (TDS) scheme retains propositions according to the rules below (Boily & Valin, 2002), or equivalent ones that have been documented in the literature (Valin & Boily, 2000):

1. All combined propositions with BPA > MAX_BPM are kept.

2. All combined propositions with BPA < MIN_BPM are discarded.

3. If the number of propositions retained in step 1 is smaller than MAX_NUM, retain, by decreasing BPA, propositions of length 1.

4. If the number of propositions retained in step 3 is smaller than MAX_NUM, do the same with propositions of length 2.

5. Repeat a similar procedure for propositions of length 3.

6. If the number of retained propositions is still smaller than MAX_NUM, retain propositions by decreasing BPA regardless of length.
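Rules 1 to 6 can be sketched as follows (a minimal Python sketch; the parameter names follow the text, and the dictionary encoding of propositions is illustrative):

```python
# Truncated Dempster-Shafer retention: keep high-mass propositions,
# discard low-mass ones, then fill the remaining slots shortest
# propositions first, by decreasing BPA.

def truncate(props, MAX_BPM, MIN_BPM, MAX_NUM):
    """props: {frozenset proposition: BPA}.  Returns the retained subset."""
    kept = {p: m for p, m in props.items() if m > MAX_BPM}       # rule 1
    pool = {p: m for p, m in props.items()
            if MIN_BPM <= m <= MAX_BPM}                          # rule 2: drop m < MIN_BPM
    for length in (1, 2, 3):                                     # rules 3-5
        for p, m in sorted(pool.items(), key=lambda pm: -pm[1]):
            if len(kept) >= MAX_NUM:
                return kept
            if len(p) == length and p not in kept:
                kept[p] = m
    for p, m in sorted(pool.items(), key=lambda pm: -pm[1]):     # rule 6
        if len(kept) >= MAX_NUM:
            break
        if p not in kept:
            kept[p] = m
    return kept
```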

The propositions that are discarded are either returned to the ignorance (in most applications) or split between the ignorance and the remaining propositions (in some applications). Optimization and benchmarking on complex realistic scenarios have catalogued the possible values of the three parameters MAX_BPM, MIN_BPM and MAX_NUM, as well as their interrelationship and their dependence on PDB size.

A slight variant of this scheme is presently being implemented for the US Navy's LAMPS helicopter for real-time operation. DS theory is also used in foreign naval programs, such as the F128 German frigates (Henrich, Kausch & Opitz, 2003) and the Finnish Fast Attack Craft Squadron 2000 (Henrich, Kausch & Opitz, 2004). It is currently being tested in COMDAT trials for the Halifax-class Canadian Frigate upgrades.


3. SSAR ISM design and stand-alone performance

We assume that the ship target has been detected using the conventional radar search mode and that the SAR antenna is oriented correctly for the image acquisition. At the operator's request, the target is imaged at high resolution using the Spotlight Synthetic Aperture Radar (SSAR) mode. Note that for ships at sea, ship motion provides a large fraction of the signal that forms the imagery, so the imagery is sometimes referred to as Inverse SAR (ISAR) imagery. It is also assumed that the SSAR image generation algorithms provide adequate platform and target motion compensation to ensure minimal blurring effects (otherwise, this can impose a severe limit on classification performance). The resulting target image, along with the target-aircraft distance, the platform altitude and the ship heading (which could be acquired from a previous long tracking time), constitute the input data for the system.

The SSAR ISM contains many of the reasoning frameworks of the previous section, namely an NN for ship category determination that encodes Knowledge Base (KB) rules (Line vs. Merchant ship) and a Bayes classifier for the length of Line combatant ships. Figure 7 below describes the main architectural features of the SSAR-based ship classifier (Valin, Tessier & Jouan, 1999). Steps 1-3 have been implemented and will be discussed in this section. The dashed Step 4 has only been designed, not implemented.

[Figure content: Step 1, image segmentation; Step 2, ship length and ship orientation estimation; Step 3, KB rules for ship category (Line or Merchant) and, if Line, ship type (e.g., Frigate); Step 4 (dashed), a ship class modules dispatcher selecting, according to length, category and orientation, class modules such as Line 130-210 m (e.g., Spruance Destroyer), Line 100-160 m (e.g., Mackenzie Frigate) and Merchants 150-250 m (e.g., Sealand), fused through evidential reasoning.]

Figure 7. High-level description of the hierarchical ship classifier design


As Figure 7 shows, the SSAR ISM is capable of providing to the MSDF module a set of four identity propositions obtained directly from actual measurements or as the results of sub-classification schemes using Artificial Intelligence rules or Bayesian estimators. These are

1. ship length (best guess and interval),

2. ship category (line or merchant), via an NN trained on classified KB rules,

3. ship type (if line) according to STANAG 4420 classification, using a Bayes classifier,

4. ship class, if enough available imagery can be obtained for training and testing.

3.1. Implementation

3.1.1. Step 1: Target segmentation

Target segmentation from the ocean clutter is done in two steps: noise removal and merging of small regions. Noise removal is performed column by column first, then row by row, in order to remove linear strikes and artifacts caused by rotating ship antennas and/or SAR image generator defects. The noise mean µ and standard deviation σ are estimated on the image border for each column/row, assuming the target is centred in the image window. This is followed by a simple pixel intensity thresholding, where the thresholds have been set empirically to (µ + 4σ) and (µ + 2σ) for columns and rows, respectively.
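A minimal sketch of the column-wise thresholding, assuming NumPy arrays and an illustrative border width (the report does not specify one); σ here is the border standard deviation:

```python
# Column-by-column noise thresholding: estimate mu and sigma per column
# from the top/bottom border rows (the target is assumed centred), then
# zero every pixel below mu + k*sigma (k = 4 for columns per the text).
import numpy as np

def threshold_columns(img, k=4.0, border=5):
    """Return a copy of `img` with sub-threshold pixels set to zero."""
    out = img.astype(float).copy()
    border_rows = np.vstack([img[:border, :], img[-border:, :]]).astype(float)
    mu = border_rows.mean(axis=0)      # per-column noise mean
    sigma = border_rows.std(axis=0)    # per-column noise std deviation
    out[out < mu + k * sigma] = 0.0
    return out
```

The row-wise pass would be identical with k = 2 and the roles of rows and columns exchanged.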

Residual small non-connected clutter regions are further discarded if they are isolated enough and their spatial extent is above an empirically determined threshold (four and five pixels for simulated and eXperimental Demonstration Model (XDM) images, respectively). The C routine that performs this task is the one implemented in KHOROS 2.0 (lvlabel.c). However, the KHOROS copyright permits the use of source code for research activities only; a complete recoding of the routine will be necessary if one wants to insert it into a commercial product.

The resulting segmented image is made binary and sent to the ship centreline detection algorithm.

3.1.2. Step 2: Ship length estimate, length and eccentricity tests

Ship length is a simple but very discriminating feature in ship type classification, especially for large Line ships. Ship length L is obtained by identifying the target's end-points, which can be obtained by estimating the ship centreline from the maximum peak of the Hough transform of the segmented image. This technique is more robust than a least-squares fit through the target or principal axes detection, because the large amount of cross-range scatterer spreading in some images tends to bias the centreline estimation. In order to save implementation time, the routines performing the Hough transform mapping between maxima of the accumulator space and the ship centreline are the ones implemented in the HOUGHTOOL software (Kälviäinen et al., 1996) (Lappeenranta University of Technology, Department of Information Technology, Finland). The same restriction regarding the use of these routines in commercial products applies here. The Hough transform was developed by Paul Hough in 1962; in the last decade it has become a standard tool in the domain of artificial vision for the recognition of straight lines (the use here), circles and ellipses.

Once ship end-points are determined, ship length can be calculated using either:

L = √(lsr² + lcr²)     or     L = lsr / sin H

where lsr and lcr are the (slant-)range and cross-range ship lengths in metres measured on the SAR image (taking into account the image resolution) and H is the ship heading angle (Figure 8). These equations can be used only on SAR images for which the cross-range resolution is known. In practice, the SAR antenna depression angle is very low, and thus the aircraft altitude is not taken into account in the calculation of L.
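The two formulas can be sketched as:

```python
# Ship length from slant-range and cross-range extents, or from the
# slant-range extent and the ship heading H (in degrees) alone.
import math

def length_from_extents(l_sr, l_cr):
    """L = sqrt(l_sr^2 + l_cr^2)."""
    return math.hypot(l_sr, l_cr)

def length_from_heading(l_sr, heading_deg):
    """L = l_sr / sin(H), with H the ship heading in degrees."""
    return l_sr / math.sin(math.radians(heading_deg))
```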

[Figure content: imaging geometry with the aircraft trajectory, the SAR look direction and the ship heading H (cases H ≤ 0 and H ≥ 0), over cross-range and (slant-)range axes.]

Figure 8. Geometry definition of ship heading and radar LOS

The length L represents a lower bound for the ship length, because ships do not necessarily scatter energy along their entire length. In addition to computing the minimum ship length L, a maximum ship length is estimated based on two assumptions:

a. The ship scatters along at least 100L/(20+L)% of its length (an empirically determined ratio);

b. The segmented target length is no less than 90% of the target length visible on the initial image.


The maximum ship length is thus computed as (20 + L)/0.90. In addition, the error on the aspect angle H should be taken into account, typically ± 5° from a Track-While-Scan system; here one assumes H is known exactly. When the cross-range resolution as well as the ship heading are known, the ship length range estimates from the two equations are combined to minimize the risk of error. In our tests, the minimum and maximum of the union of both ship length intervals are taken.

Following the target length estimate, two screening steps are performed. First, if the minimum target length is larger than 80 m, then the original image is sent to the first target classification step (Step 3); otherwise it is labelled as a small ship, which means that any subsequent ship identification is hazardous (our current ship database does not contain small ships yet). Besides, a small-ship declaration can also be caused by a failure of the segmentation or the ship end-point detection steps. Second, knowing the ship end-points allows the delimitation of the Region of Interest (ROI) around the principal target in the segmented image. The eccentricity of this target is calculated and serves as a screening step to discard objects that are not elongated enough, due to possible image artifacts or blurring. In addition, ships are always best recognized, even by humans, from broadside or plan views. In our implementation, targets having an eccentricity lower than 3.0 are not processed further.

3.1.3. Step 3: Ship category and ship type declarations

Step 3 is the first target classification step in the hierarchy. It provides

• a high-level identification declaration of the ship category, that is, Line, Merchant or Unrecognized ship, and

• a medium-level declaration of Line ship type.

3.1.3.1. Ship category

The category declaration is based on the gross spatial distribution of ship radar scatterers. Our working hypothesis is one put forward by a former DRDC-O scientist (Bob Klepko): Line ships have large structures, so radar scattering is mainly concentrated in the middle part of the ship, while Merchant ships scatter mainly from the ship end-regions.

In our implementation, the ship category discrimination is based on the number of important scatterers (the 10% most intense pixels in the image) in equal sections of the segmented target. The separating lines between the various sections are perpendicular to (1) the (slant-)range axis for ISAR images and (2) the ship centreline for SAR images, the latter requiring an estimate of the width-wise ship apparent axis.

The number of sections has been set to nine (Klepko, 1995). Different numbers were tried (for 0.75 m (slant-)range resolution imagery), but fewer than nine sections did not define enough structural detail, and more than nine provided more information than was necessary to distinguish the general shapes.


The spatial distributions in these nine ship sections are analyzed by a set of seven production (knowledge-based) rules (which split into 37 "if-then-else" clauses) originally stated in (Klepko, 1995) (Table 5, Column 2), and improved by LM Canada to include evidence measures (Table 5, Columns 3-5). These rules were created by browsing through the entire Jane's Fighting Ships and Jane's Merchant Ships references (Jane's Information Group, various years) to see what the variety of superstructures looked like and what the general appearance of each category was. The validity of the original rules was tested on XDM images in 1995 (Klepko, 1995). At that time, a 100% confidence level was assigned when a rule was fired. It turned out that ship category declarations were correctly assigned approximately 80% of the time.

In addition, of the 10 original rules (seven of which are actually used, as mentioned earlier), only the ones referring to relative scatterer numbers among ship sections have been retained. Discrimination based on absolute scatterer information is much more sensitive to image quality, which makes it less reliable.

Table 5. Production rules used to train the ship category Neural Net

Rule — Statement — Empirical evidence (Line % / Merchant % / ?? %)

Rule 1: If the first three ship sections with the most radar scattering are 1, 2 and 3 (in any ordering), then the ship is Merchant. Evidence: 5 / 90 / 5.

Rule 2: If the first three ship sections with the most radar scattering are 7, 8 and 9 (in any ordering), then the ship is Merchant. Evidence: 5 / 90 / 5.

Rule 3: If the first three ship sections with the most radar scattering are 4, 5 and 6 (in any ordering), then the ship is Line. Evidence: 90 / 5 / 5.

Rule 4: If the first four ship sections with the most radar scattering are 1, 2, 3 and 4 (in any ordering), and section 4 is not the first or second, and section 3 is not the first, then the ship is Merchant. Evidence: 10 / 80 / 10.

Rule 5: If the first four ship sections with the most radar scattering are 6, 7, 8 and 9 (in any ordering), and section 6 is not the first or second, and section 7 is not the first, then the ship is Merchant. Evidence: 10 / 80 / 10.

Rule 6: If the first four ship sections with the most radar scattering are 1, 2, 8 and 9 (in any ordering), then the ship is Merchant. Evidence: 10 / 80 / 10.

Rule 7: not used.

Rule 8: If three or four of the first four ship sections with the most radar scattering are among 3, 4, 5, 6 and 7 (in any ordering), then the ship is Line. Evidence: 70 / 15 / 15.

Rules 9 and 10: not used.

Otherwise: ?? 100.
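Rules 1 to 3 of Table 5 can be sketched as follows (a minimal Python sketch of the crisp rules only; the fall-through return is illustrative, since the full rule base also includes rules 4-6 and 8):

```python
# Ship category from the three ship sections (numbered 1..9, bow to
# stern) with the most radar scatterers: end-region scattering suggests
# Merchant (rules 1 and 2), mid-ship scattering suggests Line (rule 3).

def category_rules_1_to_3(section_counts):
    """section_counts: nine scatterer counts for sections 1..9."""
    ranked = sorted(range(1, 10), key=lambda i: -section_counts[i - 1])
    top3 = set(ranked[:3])
    if top3 == {1, 2, 3} or top3 == {7, 8, 9}:
        return "Merchant"
    if top3 == {4, 5, 6}:
        return "Line"
    return "Unrecognized"   # would fall through to the remaining rules
```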

However, the ship category declaration was improved by modulating the rules using a three-layer back-propagation neural net. Here, modulating means training the neural net using the rules themselves, leading to a kind of KB supervisor. The advantage of doing so, rather than using the rules themselves to identify the target, is twofold. First, the neural net "smoothes out" the rules (i.e., it artificially increases the number of rules), which makes them less sensitive to small distribution variations along the nine ship sections (recall that the original rules are not exhaustive). For instance, it prevents a category declaration from switching from Merchant to Line with only a one-scatterer difference within one section. Second, it provides a mapping between the infinite possibilities of scatterer distributions and the assigned evidences (confidence levels) for propositions. In the implementation, the sum of the neural net outputs is normalized to 1, and the confidence levels for the ship category propositions (including ignorance) have been empirically set to (0.98 OL, 0.98 OM, 0.98 O?? + 0.02), where OL, OM and O?? are the normalized Line, Merchant and Unrecognized neural net outputs, respectively. The 0.02 term avoids limiting the adaptivity of the subsequent fusion algorithm through a 0% confidence level on the ignorance.

Because of the limited availability of real and simulated image data, a set of nine-dimensional vectors simulating scatterer distributions was generated and used to train and test a three-layer back-propagation neural net (Figure 9). In order to restrict the number of vectors while having a fairly uniform sample distribution, the set was limited to all combinations (with repetitions) of the numbers 0, 4, 8, 12 and 16 in 9 entries for which the sum of all entries is 40. This results in 16,105 samples, which are further normalized such that the sum of all entries is 1.

Page 34: Airborne application of information fusion algorithms to

22 DRDC Valcartier TR 2004-282

Figure 9. Backpropagation Neural Net architecture used for ship category determination

According to the KB supervisor, from the 16,105 vectors, 8100 were associated with Line ships, 2939 with Merchants and 5066 were not recognized. These ratios are typical of a military surveillance mission for the CP-140.

The neural net training and testing have been done with the Stuttgart Neural Net Simulator (SNNS). Once the neural net is trained, a tool in SNNS automatically generates C code (with no copyright restriction), which can be linked to the rest of the program. Results will be shown in section 3.2.

3.1.3.2. Line ship type

If the confidence level on a Line declaration is high enough (let's say > 50%), then an estimate of the Line ship type is initiated. This is performed using a Bayes classifier based on the Frigates, Destroyers, Cruisers, Battleships and Carriers length distributions. Ship lengths have been obtained by browsing Jane's Fighting Ships and their probability density distributions approximated by Gaussians (Figure 10).


[Figure 10 plots the Gaussian probability density functions of ship length (m) for Frigates, Destroyers, Cruisers, Battleships and Carriers.]

Figure 10. Probability density distribution model for line ship length

The same equations that are shown in section 2.4 are used for these approximate Gaussian distributions. In our tests, equal probability for the five Line ship types is assumed. Mean a posteriori type probabilities are re-normalized, in order that their sum is unity, and multiplied afterwards by Ocategory. The resulting number Otype serves to set the confidence level on ship type declarations, which has been empirically chosen as 0.98 Otype. Note that the Merchant and Unrecognized category confidence levels are not modified.
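The Bayes step just described can be sketched as follows. The Gaussian means and standard deviations below are placeholders for illustration only; the report's actual distributions come from Jane's Fighting Ships:

```python
import math

# Hypothetical length statistics (mean, std) in metres -- assumptions for
# illustration, not the values derived in the report.
LINE_TYPES = {
    "Frigate": (120.0, 15.0),
    "Destroyer": (150.0, 20.0),
    "Cruiser": (180.0, 25.0),
    "Battleship": (230.0, 30.0),
    "Carrier": (280.0, 40.0),
}

def gauss(x, mu, sigma):
    """Gaussian probability density function."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def line_type_confidences(length_m, o_category):
    """Bayes classifier over the five Line types with equal priors.

    Posteriors are renormalized to sum to 1, multiplied by the category
    output O_category, and scaled by the empirical 0.98 factor.
    """
    likelihoods = {t: gauss(length_m, mu, s) for t, (mu, s) in LINE_TYPES.items()}
    total = sum(likelihoods.values())
    return {t: 0.98 * o_category * (l / total) for t, l in likelihoods.items()}
```

The resulting type confidences always sum to 0.98 Ocategory, leaving the Merchant and Unrecognized confidence levels untouched.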

The question now arises as to whether length could discriminate between categories, namely Line combatant (in French militaire) vs. Merchant (in French marchand), or between Merchant types. The following two figures show that it cannot, which supports the chosen design. Both Figure 11 and Figure 12 show so much overlap between categories (Figure 11) or Merchant types (Figure 12) that length cannot be useful as a unique discriminator. One can conclude that length can only be one of many inputs to a classifier for Merchant ships, be it for FLIR or SAR imagery.


Figure 11. Length distribution of line combatants versus merchant ships


Figure 12. Length distribution for each merchant type

Since length is not a unique discriminator for Merchants, no Bayes classifier can be constructed and length can only be one of the many inputs of a more complex Merchant neural net classifier. Therefore, a two-layer neural net was trained on 210 Merchant platforms with input features such as length and a 9-bin decomposition of strong scatterers from SAR side view (and/or front view), as estimated from profiles from Jane's Merchant Ships (Valin, Tessier & Jouan, 1999). The outputs are the five possible Merchant types: Cargo/Container (C), Oiler/Tanker (T), Passenger (P), Ferry (F) or Roll-on Roll-off ships (R), in analogy with the five Combatant types. The resulting confusion matrix on a test set of 38 different unknown Merchant ships is shown in Table 6 (columns are inputs and rows are outputs).

Table 6. Confusion matrix for merchant ship type

        C     R     T     P     F
C      67    60    33     -     -
R       -    20     -     -    17
T      25    20    67    11     -
P       8     -     -    67     -
F       -     -     -    22    83

This table shows an adequate recognition rate for all but RoRos, which have profiles showing tremendous variability. Since each column has, at most, three non-zero entries, a complex declaration consisting of three propositions for Merchant type will always contain the correct type. Furthermore, when tested on a restricted set of simulated Merchant SAR imagery of six Cargo ships (one of the most common platforms), the primary recognition rate was 5/6.


3.1.4. Step 4: Ship class declaration

Step 4 of the ISM aims to further refine the target identification up to the ship class (e.g., Mackenzie-class frigate, Belknap-class cruiser). Obviously, this necessitates much more sophisticated classifiers. Rather than training a single huge classifier that would most probably lead to convergence problems, a modular approach was selected that uses small dedicated classifiers (neural net sub-modules), each of them specialized in recognizing a subset of the ship database under a small viewing angle range.

This approach is advantageous for large databases, as it avoids retraining a large classifier when a new ship is entered in the database (a typical ship database for surveillance activities might contain up to 1000 ships, imaged at 360 different aspect angles and 10 depression angles, namely 3.6 million images).

During the recognition process, only modules corresponding to the acquisition geometry (Step 1) and most probable ship types (Step 3) will be launched. Ship type information could also be obtained from the fusion system, which has the capability of providing a list of the most probable target types.

Following recommendations of a previous study done by Queen's for LM Canada, it is proposed to feed each neural net sub-module with multi-resolution features extracted directly from the SSAR ship image. This approach was shown to offer a good trade-off between system performance and training time (Osman et al., 1997). As shown in Figure 13, features are extracted from three regions on the ship target: the bow, stern and central regions. The central region is centred on the Centre of Mass (CM) of the binary segmented target (Step 1). The bow and stern regions are centred on a pixel lying on the ship centreline (Step 2).


[Figure 13 sketches the sub-module design: each input image region is passed through wavelet transforms (W.T.), the resulting bands are combined by an amplitude operator (||.||), and the extracted features feed a back-propagation NN whose outputs are the ship classes.]

Figure 13. Design of the Neural Net submodules (feature extraction and node connections)

The three sub-images (typically 32x32 pixels) are encoded with a discrete Wavelet Transform (WT) down to the level where the image blocks are of dimension 2x2 pixels. The three high-band blocks (the so-called Low-High (LH), High-Low (HL) and High-High (HH) bands) at each decomposition level are fused into one image, that is, the Euclidean amplitude of the three pixels. The 2x2 low-band image, as well as the amplitude images (2x2, 4x4, 8x8, etc.), constitute the features to be sent to the neural net. This multi-resolution feature extraction technique is reminiscent of the Gabor Jets approach used, for instance, in face (Konen & Schultze-Kruger, 1995) and IR target (Hecht-Nielsen, 1990) recognition systems. The neural net itself is a standard back-propagation neural net with one hidden layer and a number of output neurons that corresponds to the number of ship classes. The number of hidden neurons should be optimized during training.
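A minimal sketch of this multi-resolution feature extraction follows. The report does not name the wavelet; an averaging Haar transform is assumed here for simplicity:

```python
import numpy as np

def haar_step(img):
    """One level of an (averaging) 2-D Haar transform: returns LL, LH, HL, HH blocks."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0
    hi = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def multires_features(img):
    """Decompose down to a 2x2 low band, fuse the LH/HL/HH bands of each level
    into one amplitude image (Euclidean norm of the three co-located pixels),
    and concatenate everything into the feature vector fed to the neural net."""
    feats = []
    band = np.asarray(img, dtype=float)
    while band.shape[0] > 2:
        band, lh, hl, hh = haar_step(band)
        feats.append(np.sqrt(lh**2 + hl**2 + hh**2).ravel())  # fused high bands
    feats.append(band.ravel())  # final 2x2 low-band image
    return np.concatenate(feats)
```

For a 32x32 sub-image this yields fused amplitude images of 16x16, 8x8, 4x4 and 2x2 plus the 2x2 low band, i.e. a 344-element feature vector.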

It was decided not to pursue this ship class step further for two main reasons. The first one is that the ship type is often quite enough for ID when combined with other intelligent sensors such as the ESM, as will be demonstrated in the third report of this series. The second reason is a more pragmatic one, namely the sheer size of the endeavour, in view of the fact that it may produce little in return for the effort deployed. Since this step was halted at the design stage, further details need not be provided.


3.2. Tests

3.2.1. Tests on simulated images

Use of a computer-based ship SAR/ISAR image simulation facility enables an extensive database of images to be generated. Numerous images can be produced for various combinations of ship motions and orientations. This simulation facility provides a very cost-effective method for acquiring radar images when compared with the alternative methods of acquiring real images, using either full-size vessels or scaled-down ship models. It also enables extensive testing of automatic ship classification algorithms.

A Computer-Aided Design (CAD) modeller and a radar scattering simulator developed by London Research and Development for DRDC-O were used. It is assumed that the ship is a rigid body (no flexion or vibration of the hull), which can be modelled as untextured plates, corners and corner cubes. The CAD modeller uses a two-dimensional block-based method to generate a numerical representation of ship structures, as they appear above the waterline. CAD models for the 46 ships listed in Table 7 were provided by DRDC-O.

Table 7. Ship database used for simulated imagery

Category    Type           Class                                                        #
Lines       Frigates       Boxer, Oliver Hazard Perry, Knox, Bremen, Amazon, Grisha,
                           Krivak, Mackenzie, Ste-Croix, Terranova, Mirka              11
            Destroyers     Adams, Coontz, Spruance, Iroquois, Kotlin Sam, Udaloy,
                           Sovremenny                                                   7
            Cruisers       Belknap, Longbeach, Virginia, Sverdlov/Dzerzhinski, Kara,
                           Ticonderoga, Kresta                                          7
            Battleships    Kirov, Moskva                                                2
            Carriers       Kiev, Nimitz, Invincible, Tarawa                             4
Merchants   Cargos         Donato Marmol, Geestbay                                      2
            Containers     Sealand Freedom, Sydney Express                              2
            Bulk Carriers  Farland, Radnik                                              2
Others      Supply         Sacramento, Preserver, Boris Chilikin, Ivan Rogov, Ugra      5
            Tug            John Ross                                                    1
            Navigation     Sir William Alexander                                        1
            Research       Quest                                                        1
            Trawler        Trawler                                                      1

The SAR/ISAR simulator, called SARSIM2, uses a physical optics approximation for the RCS estimation. Many modifications to this simulator have been performed by LM Canada in order to improve its usefulness.

First, three artificial image degradation algorithms have been added in order to generate images of more realistic appearance. These are:

a. Pixels spreading (local random swapping of pixels),

b. Pixel blurring to simulate non-ideal detector response,

c. Speckle noise (a multiplicative noise following a Gamma distribution with Number of Looks = 3).
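The speckle degradation (c) can be sketched as multiplicative unit-mean Gamma noise. The parameterization below (shape = number of looks, scale = 1/looks) is the usual multi-look convention and is an assumption, not code from SARSIM2:

```python
import numpy as np

def add_speckle(img, looks=3, seed=0):
    """Multiplicative Gamma speckle with unit mean.

    With shape=looks and scale=1/looks, the speckle has mean 1 and variance
    1/looks, so an L-look intensity image keeps its mean but gains texture.
    """
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=np.shape(img))
    return np.asarray(img, dtype=float) * speckle
```

With looks=3 the noise variance is 1/3, which qualitatively roughens the simulated texture without biasing the mean RCS.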


These degradations are performed sequentially over a local 3x3 window, just after the RCS simulation and before saving the image to disk. The degradation parameters have been set empirically such that the simulated image texture qualitatively resembles XDM image texture.

Second, the size of many memory buffers was increased in order to allow generation of high-resolution imagery (0.75 m slant-range resolution in our tests). In addition, simulated images are coded in raw PGM format (1 byte/pixel) and only the ROI, delimited by the (known) ship length, is written to disk in order to save disk space. Figure 14 shows an example of a simulated SAR image for the Merchant ship Radnik (a bulk carrier, as shown in Table 7) at a 45° aspect angle, with the other acquisition parameters as shown in Table 8.

Table 8. Simulation parameters used for the tests

Target     Aspect     (Slant-)Range   Cross-Range   Aircraft   Aircraft   Radar
Distance   Angle H    Resolution      Resolution    Altitude   Velocity   Frequency
(km)       (°)        (m)             (m)           (m)        (km/s)     (GHz)

100        ±30, ±80   0.75            2             3000       0.15       10

Figure 14. An example of a simulated SAR Image using the modified SARSIM2 simulator

In the following paragraphs, the performance of Steps 1 through 3 of the classifier is presented, for the radar, target and aircraft simulation parameters given in Table 8.

Table 9, Table 10 and Table 11 give the confusion matrices for ship category and ship type declarations cumulated over the four ship aspect angles of Table 8. The Rejected column refers to the number of images that did not pass the minimum ship length or eccentricity tests. For the category (Table 9) and 1-proposition type (Table 10) declarations, the winning declaration is the one that has the highest confidence level. For the 2-proposition type (Table 11) declarations, the winner is the one corresponding to the sum of the two highest confidence levels. Here are a few remarks about the results depicted in those three tables.

a. A rejected image cannot really be considered a totally incorrect declaration.

b. An Unrecognized declaration cannot really be considered a totally incorrect declaration for Line and Merchant ships because the KB rules that served to train the neural net are not exhaustive.

c. 17% of the images have been rejected (14% for Lines, 14% for Merchants and 31% for Others).

d. 4% of Lines have been incorrectly classified (Lines declared as Merchants).

e. 50% of Merchants have been incorrectly classified (Merchants declared as Lines).

f. Clearly, the system is more robust at recognizing Line ships than Merchants. However, 43% of Merchants and Others have been declared as Lines while 11% of Lines have been declared as Merchants or Others. This can be interpreted as a tendency of the system to overestimate the target threat, which is not necessarily a drawback for military surveillance platforms.

g. Table 10 shows that the system is not very robust for 1-proposition ship type declarations (45% for Frigates, 50% for Destroyers, 68% for Cruisers, 37% for Battleships and 42% for Carriers).

h. However, 2-proposition ship type declarations (Table 11) are more faithful (77% for Frigates, 96% for Destroyers, 82% for Cruisers, 62% for Battleships, 75% for Carriers). Even though more generic, such declarations are still very useful within the current sensor fusion environment.
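The selection of the winning 1- and 2-proposition declarations described above can be sketched as follows (the function name is ours):

```python
from itertools import combinations

def best_declarations(conf):
    """Given per-type confidence levels, return the winning 1-proposition
    declaration (highest confidence) and the winning 2-proposition
    declaration (the pair with the highest summed confidence)."""
    one = max(conf, key=conf.get)
    two = max(combinations(conf, 2), key=lambda pair: conf[pair[0]] + conf[pair[1]])
    return one, set(two)
```

For example, for the adams row of Table 12 (F 12, D 52, C 17, B 2, Air. Car. 2) the 1-proposition winner is D and the 2-proposition winner is D∨C (52 + 17 = 69, which beats F∨D at 64).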

Table 9. Confusion matrix for ship category declarations

             Total   Rejected   Sub-Total   Lines   Merchants   Unrecognized
Lines         124       17         107        95        4            8
Merchants      28        4          24        12        4            8
Others         32       10          22         8        6            8

Table 10. Confusion matrix for 1-proposition ship type declarations

              Total   Rejected   Sub-Total    F    D    C    B    A   M∨U
Frigates        44       13          31      14   13    -    -    -    4
Destroyers      28        -          28       3   14   11    -    -    -
Cruisers        28        -          28       -    5   19    -    -    4
Battleships      8        -           8       -    -    3    3    1    1
Air. Car.       16        4          12       -    -    4    -    5    3


Table 11. Confusion matrix for 2-proposition ship type declarations

              Total   Rejected   Sub-Total   F∨D   D∨C   C∨B   B∨A   C∨A   M∨U
Frigates        44       13          31       24     3     -     -     -     4
Destroyers      28        -          28        7    20     -     -     1     -
Cruisers        28        -          28        1    12     1     -    10     4
Battleships      8        -           8        -     1     1     4     1     1
Air. Car.       16        4          12        -     -     -     4     5     3

For complementary information, Table 12 gives the detailed ship declarations for the data subset H = +80°. Confidence levels for ship type do not sum to 100% because of the residual weighted confidence levels for the Merchant and Unrecognized categories (not shown). The first column gives the ship class and the real ship type and length (in metres). The second column is the estimated ship length range. Columns 3 to 10 give the confidence levels of the various category and type declarations.

Table 12. Detailed ship length estimation (metres) and confidence level declarations (%) for data subset H = +80°. Boldface indicates correct declarations.

                    Length     L    M   Unrecog.   F    D    C    B   Air. Car.   Comments
adams (D-133)       131-169   84    7      9      12   52   17    2       2
amazon (F-117)       80-112   83    7     10      76    2    2    2       2
belknap (C-167)     136-174   85    7      8       8   49   24    2       2
boxer (F-145)       142-182   49    9     42       4   23   22    2       2
bremen (F-130)      108-143   68    5     26      42   22    3    2       2
chilikin (O-162)    158-198    2   87     11       -    -    -    -       -
coontz (D-156)      131-168   85    7      8      13   53   16    2       2
donato (M-145)      120-156    3   82     15       -    -    -    -       -
farland (M-272)     253-306    3   78     20       -    -    -    -       -
geestbay (M-159)    151-192    7   33     60       -    -    -    -       -
grisha (F-73)        69-100   45    9     46      41    2    2    2       2        Small length
invinc (A-206)      184-228   83    7     10       2    2   58    3      20
irogov (O-159)      155-197    6   50     44       -    -    -    -       -
iroquois (D-130)    109-144   83    6     10      49   28    3    2       2
Jross (O-95)         74-105   51   11     38       -    -    -    -       -        Small length
kara (C-174)        167-209   19   17     64       2    3   16    2       3
kiev (A-270)        260-312   52    9     39       2    2    2   16      33
Kirov (B-248)       230-279   65    9     26       2    2    3   40      21
Knox (F-133)        124-161   68    5     27      18   42    7    2       2
kotlin (D-126)      123-159   80    6     15      23   48    7    2       2
kresta (C-158)      148-189   83    9      8       3   29   46    2       3
krivak (F-122)      105-141   33   46     21      22   10    2    2       2
longbeac (C-220)    200-247    6   39     55       2    2    4    3       3
mackenzie (F-112)    76-107   68    5     27      63    2    2    2       2        Small length
mirka (F-81)         71-101   84    7      9      77    2    2    2       2        Small length
moskva (B-197)      145-186    4   11     85       2    3    3    2       2
nimitz (A-332)        36-63   47    9     44      43    2    2    2       2        Bad segment.
perry (F-135)       117-153   75    8     17      31   38    4    2       2
preserve (O-172)    132-170   81    8     11       -    -    -    -       -
quest (O-77)         75-106    7   45     48       -    -    -    -       -
radnik (M-189)      179-223   21   12     67       -    -    -    -       -
sacramen (O-241)    235-286    5   50     44       -    -    -    -       -
sealand (M-227)     199-244   80    7     13       -    -    -    -       -
sirwalex (O-69)       62-92   76    7     17       -    -    -    -       -        Small length
sovrem (D-155)      151-191   84    6      9       3   26   51    2       3
spruance (D-171)    166-208   83    6     11       2    7   67    2       6
stecroix (F-112)     81-113    3   83     14       3    2    2    2       2
sydney (M-210)      207-252   85    7      9       -    -    -    -       -
tarawa (A-249)      252-302    9   30     61       2    2    2    5       5
terranova (F-113)    76-108   61    5     35      56    2    2    2       2        Small length
ticonder (C-171)    157-197   72    7     21       2   15   51    2       4
trawler (M-145)       38-65    3   67     30       -    -    -    -       -        Small length
udaloy (D-158)      141-181   85    7      8       5   41   35    2       3
ugra (O-141)        130-168   85    7      8       -    -    -    -       -
virginia (C-178)    156-198   85    7      8       3   18   60    2       4
zerzinsk (C-210)    174-217   85    7      8       2    3   69    2      10

3.2.2. Test on XDM images

The XDM data set consists of a sampling of SSAR and ISAR imagery. The data set is approximately the same as the one used in DRDC-O report #1283. The acquisition parameters (DND classified and not reproduced here) have also been provided but are not complete. In particular, the ship heading is not known for all images. Table 13 summarizes the output of Steps 1 to 3 of the classifier for the XDM data set.

Table 13. Detailed ship length estimation (metres) and confidence level declarations (%) for the XDM image set. Boldface indicates correct declarations. Ship: (S) SSAR; (I, #) ISAR with # frames in the sequence.

Ship                   True     True       Estimated   Estimated   Estimated   Comment
                       Length   Cat/Type   Length      Category    Type
                                                       (L-M-??)
farland (S)             272      M         253-304     17-14-69    -           -
preserver (S)            -       O         -            3-67-30    -           Unkn. Head.
mackenzie (S)            -       L/F       -           61-9-30     -           Unkn. Head.
preserver (S)            -       O         -            5-52-43    -           Unkn. Head.
sealand (S)             227      M         195-239      3-23-74    -           -
mackenzie (S)            -       L/F       -           82-6-13     -           Unkn. Head.
mackenzie (S)            -       L/F       -           79-5-15     -           Unkn. Head.
mackenzie (S)            -       L/F       -           11-19-70    -           Unkn. Head.
radnik (S)               -       M         -           84-5-10     -           Unkn. Head.
radnik (S)               -       M         -           11-18-71    -           Unkn. Head.
donato marmol (S)        -       M         -            2-79-19    -           Unkn. Head.
perry (S)               135      L/F       119-154     85-6-9      F33-D48     -
knox (S)                 -       L/F       -            8-28-64    -           Unkn. Head.
fraser (S)               -       L/D       -           86-5-9      -           Unkn. Head.
mackenzie (I, 4)        112      L/F       108-149     87-5-8      F50-D36     -
daewoo spirit (I, 5)     -       M         -           11-42-47    -           Unkn. Head.
knox (I, 3)              -       L/F       -           85-6-9      -           Unkn. Head.
knox (I, 3)              -       L/F       -           51-7-42     -           Unkn. Head.
perry (I, 3)             -       L/F       -           75-11-14    -           Unkn. Head.
fraser (I, 3)            -       L/F       -           82-4-14     -           Unkn. Head.
knox (I, 3)              -       L/F       -           74-9-17     -           Unkn. Head.
protecteur (I, 3)        -       O         -           85-5-10     -           Unkn. Head.
protecteur (I, 5)        -       O         -           31-21-48    -           Unkn. Head.
fraser (I, 15)          115      L/D       -           -           -           Rejected
terranova (I, 15)       113      L/F       85-125      84-6-10     F82-D3      7/15 Rejected
preserver (I, 8)        172      O         127-181      7-49-44    -           -
spruance (I, 16)        171      L/D       93-162      64-10-26    F33-D27     -
spruance (I, 16)        171      L/D       102-179     68-8-24     F24-D36     -
terranova (I, 16)       113      L/F       81-125      84-6-10     F82-D5      8/16 Rejected
terranova (I, 1)        113      L/F       97-130      84-8-8      F76-D8      -
terranova (I, 16)       113      L/F       81-131      68-11-21    F66-D5      4/16 Rejected
mackenzie (I, 13)       112      L/F       84-130      86-5-9      F83-D3      8/13 Rejected
mackenzie (I, 3)         -       L/F       -           85-4-11     -           Unkn. Head.
mackenzie (I, 1)         -       L/F       -           86-5-10     -           Unkn. Head.
terranova (I, 3)        113      L/F       86-134      86-5-9      F80-D5      -
terranova (I, 1)        113      L/F       100-134     86-6-8      F73-D13     -
terranova (I, 1)        113      L/F       -           -           -           Rejected
preserver (I, 3)        172      O         149-193     83-5-12     -           -
preserver (I, 2)        172      O         146-202     85-5-12     -           -
fraser (I, 1)           115      L/D       88-120      86-5-9      F85-D1      -
terranova (I, 1)        113      L/F       -           -           -           Rejected

Below are a few remarks about the results presented in Table 13:

a. SSAR images are labelled by an (S). These are all single-frame images.

b. ISAR images are labelled by (I) along with the number of frames in the sequence.


c. During the tests, ship length estimated from cross-range resolutions (on SSAR images only) appeared to be unreliable. However, ship length calculated using ship heading (when available) provided better results. This seems to indicate an inconsistency in DRDC-O's parameter table or between images and parameters. In view of this fact, all images were analyzed as ISAR ones; thus, ship length has been estimated only from images with known ship heading.

d. For multi-frame ISAR sequences, the estimated ship length range is the minimum and maximum length measured on the subset of frames which pass the minimum ship length and eccentricity tests. Confidence levels on ship category and type are averaged over the same frame subset.

e. The XDM image set is composed of 10 images of Merchants, 146 images of Lines and 21 images of Others.

f. 17% (30/177) of the images have been rejected.

g. 29% (43/147) of the non-rejected images have unknown ship heading.

h. 15% (16/104) of the non-rejected images with known heading yield a wrong ship length estimate.

i. 18% (27/147) of the non-rejected images yield a wrong ship category estimate.

j. 0% (0/101) of the non-rejected images of Lines have been incorrectly classified (Lines declared as Merchants).

k. As for the test results from simulated images, Merchant discrimination is less robust. However, only 10% (1/10) of the ship images of Merchants have been declared Lines, which is much better than the 50% obtained on simulated images. The rest (9/10) have been declared as Merchants (10%) or Others (80%).

l. 24% (18/76) of the correctly identified Line ship images with known ship heading yield a wrong 1-proposition ship type estimate.

m. 0% (0/76) of the correctly identified Line ship images with known ship heading yield a wrong 2-proposition ship type estimate.

A comparison of these results with the ones obtained from simulated images shows that the classifier performance on real images is as good as, and sometimes even better than, the performance on simulated images. This could be due to the poor scattering properties of the gross CAD modelling used to simulate radar images. Steps 2 and 3 have been refined further (Lefebvre, 1999) along two major directions:

a. First, by using a larger NN trained with the set of rules on randomly chosen 9-bin profile vectors and testing on real imagery;


b. Second, by refining and benchmarking the Bayesian type classifier based on length, for various a priori probabilities, and testing on real imagery.

Each of the following subsections first describes the selection of data for training, validation and testing, namely:

a. 9-bin profile vectors for the category NN,

b. a database of 145 ships for the Bayesian type classifier,

and then the performance on real imagery.

3.3. Neural net for determining category

3.3.1. Category NN training, validation and testing on profile vectors

For this entire exercise, 32,211 profile vectors were selected, which, according to the knowledge-based rules of Table 5, correspond to the following ship distribution:

a. 16,259 Line combatants

b. 5922 Merchant ships

c. 10,030 Others.

It is hoped that such a distribution of SARSIM imagery is representative of the a priori distribution of a typical mission. The 9-bin profile vectors were normalized such that the sum is 1 and no given cell exceeds 0.4. Following Hornik et al. (1989), in which it is claimed that a single-hidden-layer NN can classify any problem as long as enough neurons are taken, one has chosen to optimize a single-hidden-layer NN by varying the number of hidden neurons.
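Assuming these 32,211 vectors are the full enumeration described earlier for the category neural net (all combinations with repetitions of 0, 4, 8, 12 and 16 in 9 entries summing to 40), a brute-force check reproduces both the count and the 0.4 ceiling on the normalized cells:

```python
from itertools import product

# Enumerate all 9-entry combinations (with repetition) of 0, 4, 8, 12, 16
# whose entries sum to 40, then normalize each vector so its entries sum to 1.
# Note this is an assumed reconstruction of the profile-vector set, not the
# report's generation code.
vectors = [v for v in product((0, 4, 8, 12, 16), repeat=9) if sum(v) == 40]
profiles = [[x / 40.0 for x in v] for v in vectors]

print(len(vectors))  # 32211 candidate profile vectors
```

The largest possible entry in a vector is 16, so no normalized cell can exceed 16/40 = 0.4, consistent with the constraint stated above.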

The next step is to vary the training set size given a fixed (but randomly chosen) validation set of 1000 profile vectors. The values used for the training set size were thus varied through 100, 200, 500, 1000, 2000, 5000 and 10,000. Figure 15 shows two examples of training and validation set curves: on the left for 6 neurons and on the right for 19 neurons; on the top for a training set of 100 and on the bottom for a training set of 5000. In each sub-figure, the top curve shows the generalization error and the lower curves the learning error.

The upper left and right sub-figures, which correspond to 100 training examples, show a marked difference between the two curves, which is an indication that the system has memorized the examples. One can conclude that 100 examples for training is too little. However, the lower left and right sub-figures, which correspond to many more examples (5000), have very similarly shaped curves and the generalization curve (upper curve) flattens out rather than increasing, which is the desired behaviour.


Figure 15. Training set and validation set curves for 100 or 5000 examples and 6 or 19 neurons

Next, one can follow the evolution of the generalization error as a function of the number of hidden neurons for a given fixed number of test examples, say, 1000. This is shown in Figure 16.


Figure 16. Generalization error on a fixed test sample vs. the number of hidden neurons

As expected, increasing the training set decreases the generalization error at a fixed number of hidden neurons. Furthermore, for each of the curves, the generalization error decreases as the number of neurons increases, also as expected.

3.3.2. Validation of category NN on real imagery

The best category NN had many hidden neurons (29) and was trained on relatively few (100) examples. This NN seemed to yield the best compromise between approximating the KB rules, whose performance is shown in Table 14, and good overall performance, shown in Table 15. The Merchant recognition rate is poor in both cases, however.

Table 14. Performance of the Knowledge-Based rules by category

                  Line combatant   Merchant   Other
Line combatant         105             1        5
Merchant                 2             3        5
Other                    9             7        7

Table 15. Performance of the NN by category

                  Line combatant   Merchant   Other
Line combatant         109             1        1
Merchant                 4             3        3
Other                   13             4        6


4. FLIR ISM and its stand-alone performance

The FLIR imagery, which was procured from unclassified sources, does not overlap in ship content with the SSAR imagery, whether the latter be real or simulated. There is therefore no need for a common design between the ISMs. Since the FLIR is a passive sensor, it cannot provide range information, without which ship length cannot be defined. Indeed, the procured imagery did not even record the zoom factor from which a length determination would have been indirectly feasible. Let us recall that ship length was a crucial element of the SSAR Bayes classifier component. In a real situation where the FLIR is cued to a track, range (and bearing) information immediately becomes known and a Bayes length classifier should be attempted. The unfortunate fact is that the FLIR imagery procured was not from actual flights of the Aurora and little information was provided along with the imagery.

4.1. Feature selection

The FLIR images are taken from different acquisition angles, from different zoom factors, and are not necessarily centred in the frame. For the FLIR, it is therefore essential to find attributes (or features) that are invariant under the three transformations of rotation, scaling and translation.

Park and Sklansky (1990) extracted 11 attributes when they developed an automated design of linear tree classifiers for ship recognition using the same FLIR data consisting of 2545 images. These 11 attributes consist of seven moments invariant under scaling (to account for different zooms), rotation (because of different ship headings) and translation (since the ship image is not necessarily centred), which account for general features of the ships, and four auto-regressive parameters that provide more detailed target information.

The seven invariant moments were originally given by Hu (1962), and are built from the second and third order central moments given by:

µnm = Σ(x,y)∈S (x − x̄)^n (y − ȳ)^m

where the order of the moment is (n+m), x (y) denotes the horizontal (vertical) coordinate in the silhouette S, and x̄ and ȳ are the coordinates of the centroid of S.

The actual seven moments mi are:

m1 = r/B

m2 = {(µ20 − µ02)^2 + 4µ11^2} / r^4

m3 = {(µ30 − 3µ12)^2 + (3µ21 − µ03)^2} / r^6

m4 = {(µ30 + µ12)^2 + (µ21 + µ03)^2} / r^6

m5 = {(µ30 − 3µ12)(µ30 + µ12)[(µ30 + µ12)^2 − 3(µ21 + µ03)^2] + (3µ21 − µ03)(µ21 + µ03)[3(µ30 + µ12)^2 − (µ21 + µ03)^2]} / r^12

m6 = {(µ20 − µ02)[(µ30 + µ12)^2 − (µ21 + µ03)^2] + 4µ11(µ30 + µ12)(µ21 + µ03)} / r^8

m7 = {(3µ21 − µ03)(µ30 + µ12)[(µ30 + µ12)^2 − 3(µ21 + µ03)^2] − (µ30 − 3µ12)(µ21 + µ03)[3(µ30 + µ12)^2 − (µ21 + µ03)^2]} / r^12

where r denotes the radius of gyration and B denotes the distance of the object from the camera.

The weakness of choosing invariant features is that completely invariant features contain only information on the global shape of a given ship and represent poorly the details of the object. To overcome this disadvantage, another set of features was extracted by fitting an Auto-Regressive (AR) model to the one-dimensional sequence of the projected image along the horizontal axis.

The projection of a ship image onto the horizontal axis usually preserves the shape of the ship image, provided that the major axis of the ship is parallel to the horizontal axis. If r(i), i = 1, 2, …, N denotes the sequence of the projection sampled at N equally spaced points along the horizontal axis, an AR model can be constructed that expresses r(i) as a linear combination of the previous projections r(i−j), j = 1, …, m, plus a bias α and the error ε(i) associated with the model, according to the equation:

r(i) = Σ_{j=1}^{m} θj r(i−j) + α + √β ε(i)

The parameters are estimated by a least squares fit of the model to the one-dimensional sequence r(1), …, r(N). The least squares estimates approximate the maximum likelihood estimates. Let θi, α and β now denote the least squares estimates of these parameters. Thus the complete feature vector A of an image consists of the seven invariant moments

Ai = mi for i = 1, …, 7

and four AR parameters,

Ai+7 = θi for i = 1, 2, 3, and A11 = α/√β

All these parameters have also been shown to be invariant to rotation, translation and scaling, so that they can indeed be used as a feature vector for the purpose of classification.
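The least squares fit can be sketched with a lagged design matrix, under the AR model above (a sketch, not the report's implementation):

```python
import numpy as np

def ar_features(r, order=3):
    """Least squares fit of r(i) = sum_j theta_j r(i-j) + alpha + sqrt(beta) eps(i),
    returning the four AR features (theta_1, theta_2, theta_3, alpha/sqrt(beta))."""
    r = np.asarray(r, dtype=float)
    y = r[order:]
    # design matrix: lagged projection samples plus a constant column for the bias
    X = np.column_stack([r[order - j:len(r) - j] for j in range(1, order + 1)]
                        + [np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    theta, alpha = coef[:order], coef[order]
    beta = np.mean((y - X @ coef) ** 2)  # residual (noise) variance estimate
    return np.concatenate([theta, [alpha / np.sqrt(beta)]])
```

Fitting an order-3 model to a projection generated by a known first-order recursion recovers the leading coefficient, which is a convenient check of the lag indexing.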

4.2. Desired output classes

The FLIR imagery that was procured was accompanied by ground truth information as to the ship type that was imaged. The desired output classes are therefore shown below, together with the actual number of ship images for each class (Tremblay & Valin, 2003):

1. Destroyer (D) with 340 images

2. Container (CO) with 455 images

3. Civilian Freighter (CF) with 186 images

4. Auxiliary Oil Replenishment (AOR) with 490 images

5. Landing Assault Tanker (LAT) with 348 images

6. Frigate (F) with 279 images


7. Cruiser (CR) with 239 images

8. Destroyer with Guided Missile (DGM) with 208 images

Typical silhouettes for the best imagery from the eight classes are shown in Figure 17 below, with the eight classes being from left to right and top (1 through 4) to bottom (5 through 8).

Figure 17. Typical imagery for the eight classes.

4.3. Fusion approach

The FLIR imagery just shown is of poor contrast compared with SSAR imagery. It should be expected that different reasoning frameworks would perform quite differently when confronted with such imagery.

However, in the last decades, fusers for different classification approaches (different classifiers) have been developed (Kittler & Roli, 2000). Considerable gains in classification performance have been achieved by fusing and combining different classifiers. Multiple classifiers are a solution for obtaining higher performance in terms of recognition rate and reliability. Their advantage is that they make decisions by considering information coming from individual classifiers that do not behave in the same way and that can complement each other. Some classifiers will perform well when others perform poorly, so the chances of finding the correct answer are better among several classifiers than with only one.

Indeed, Rao (2001 and 2002) has demonstrated that individual results can be fused in order to obtain a more reliable decision, and he has also demonstrated that a fuser can be guaranteed to perform at least as well as the best classifier under certain conditions.

The objective of any good fuser is to perform at least as well as the best classifier in any situation. To this end, two different fusers were considered, using two different reasoning frameworks:

1. NN, and

2. DS

and four different classifiers based on four reasoning frameworks applied to classifiers:

1. Bayes

2. NN


3. DS, and

4. k-nearest neighbours.

In the first fusion method, three classifiers are considered (Tremblay & Valin, 2002):

1. DS,

2. Bayes and

3. k-nearest neighbours

and fused by a feed-forward NN fuser. This work is closely related to Rao's (2002) work, but in a more practical way, since the results of the classifiers are compared with the results of the fusion of two or three classifiers using an NN fuser. The NN fuser, in the limit of a very large sample data set of images, obeys Rao's conditions for guaranteed improved performance. In practice, however, one is far from having enough images, so one can only hope to demonstrate an improvement by experimentation. The fuser is indeed found to give a performance equal or superior to that of the best classifier in all cases.

To optimize the results for every class of ship, individual classifiers were implemented using the DS method for each class, i.e., an individual classifier returns whether or not the ship belongs to the class. The result of the generic DS classifier was compared with the results of the individual DS classifiers. The improvement in recognition varies between 3% and 20% per class.

In the second fusion method (Rhéaume et al., 2002), a different set of classifiers

1. Bayes,

2. NN, and

3. k-nearest neighbours,

are combined using DS theory by appropriately defining the weights that best represent individual classifier evidences. This is called the measure-based method because it relies on the internal information of each classifier rather than on statistics.

It should be noted that the NN classifier is not used for the NN fuser, and the DS classifier is not used for the DS fuser, in order to ensure diversity of reasoning schemes.

4.4. Frequency distribution

Each attribute will discriminate different classes to varying degrees. The classifiers, which will be described later, have a performance that depends slightly on the discreteness of the binning scheme used for the attributes. Very precise values of each attribute are neither desired nor easily measured. The width of the bins is a compromise between the expected extraction accuracy of the given attribute from the imagery and having a representative number of classes in each bin of the attribute. Given that a certain image provides values (within a bin) for each attribute, each classifier will use that value in a different manner: Bayesian probability, DS BPA (or mass), etc.
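The binning step just described can be sketched as follows; the function names and the data layout are illustrative, not taken from the report.

```python
from collections import defaultdict

def frequency_table(samples, bin_width):
    """Bin attribute values per class, as used to build the frequency graphs.

    samples: iterable of (class_label, attribute_value) pairs.
    Returns {class_label: {bin_index: count}}; bin index k covers
    the interval [k*bin_width, (k+1)*bin_width).
    """
    table = defaultdict(lambda: defaultdict(int))
    for label, value in samples:
        table[label][int(value // bin_width)] += 1
    return table

def bin_mass(table, label, value, bin_width):
    """Relative frequency of value's bin within a class -- the statistic a
    classifier reads off the frequency graph (as probability or DS mass)."""
    counts = table[label]
    total = sum(counts.values()) or 1
    return counts[int(value // bin_width)] / total
```

With a bin width of 200, as in Figure 18, an attribute value of 160 and one of 170 fall in the same bin, reflecting the limited extraction accuracy discussed above.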

The most convenient way to show the discriminatory power of an attribute for every class is through one frequency graph per attribute. Frequency graphs have thus to be made for each


of the 11 attributes. Such a frequency graph is shown in Figure 18 for attribute 1, which is the first entry of the feature vector, m1 (Tremblay & Valin, 2002 and 2003).

Figure 18. Frequency graph for attribute 1 binned in increments of 200 (vertical axis: number of images per bin; one bar per ship type, Type 1 through Type 8).

In Figure 18, the vertical axis represents the number of times that images of each type have the attribute values on the horizontal axis. The relative length of each bar corresponding to each class (or type) indicates how often the attribute value is obtained for imagery of that class. For example, a value between 1600 and 1800 strongly favours class 4 while being very rare for class 3, and values below 600 can only be representative of classes 1, 4, and 6.

In general, classifiers use all attributes for all classes, earning the name "generic" classifier. When only a selected set of attributes is used for a given class, one refers to a "specialized" or individual classifier for that class.

Several comments are in order at this time:

• The class (type) distribution varies considerably from bin to bin, making it difficult to fit smooth curves; such fitting will therefore not be attempted.

• It can happen, due to the limited statistics in the data, that a given class is not represented in an attribute bin sandwiched between bins that are populated by


that class. Such a zero probability would seem accidental, thus a Bayesian classifier implementation will have to account for this variability in performance depending on the binning width.

• For certain classes, it may occur that a given attribute is not discriminatory (the class can have a relatively uniform distribution across all bins of the attribute), hence it may be beneficial to leave that attribute out in certain classifiers. Leaving certain attributes out will thus generate more efficient specialized classifiers for a given class.

• In all classifier examples below, the data sample is distributed into training and test sets (varying between 1000 and 1500 examples), and both choices will be shown to provide consistent results.

• The original sample distribution is not uniform across all classes, with class 3 least represented and class 4 the most. It should therefore be expected that classifiers which can exploit this a priori information (e.g., Bayes) will perform better. Of course, if the distribution changes in future field exercises, such Bayesian classifiers would introduce a bias.

4.5. Classifiers

4.5.1. DS classifiers

We fused the eleven attributes sequentially with the DS method, using the frequency graphs to assign the masses for each class. At each step there is a lot of conflict, which requires renormalization of the masses in the fused result for each of the eight classes. After the final combination is completed, one returns the class with the highest mass. Table 16 shows the DS classifier confusion matrix, with an Average Identification Rate (AIR) of 74.5%. The AIR is simply the number of correct ship classifications divided by the number of items to classify.
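With masses assigned to singleton classes only (no mass on ignorance), each step of this sequential DS combination reduces to a pointwise product followed by renormalization of the non-conflicting mass. The following is a minimal sketch under that singleton-only assumption; function names are illustrative.

```python
def combine(m1, m2):
    """Dempster's rule for two mass functions over singleton classes only:
    pointwise product, then renormalization by 1 - conflict."""
    fused = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in m1}
    total = sum(fused.values())          # equals 1 - conflict
    if total == 0.0:
        raise ValueError("total conflict: no class supported by both sources")
    return {c: v / total for c, v in fused.items()}

def ds_classify(attribute_masses):
    """Sequentially fuse one mass function per attribute and return the
    class with the highest fused mass."""
    fused = attribute_masses[0]
    for m in attribute_masses[1:]:
        fused = combine(fused, m)
    return max(fused, key=fused.get)
```

In the report, each of the eleven mass functions would come from the frequency graph of the corresponding attribute, one mass per ship class.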

Table 16. Confusion matrix for the DS classifier with AIR = 74.5%

         Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
Class 1  0.871    0.006    0.000    0.009    0.006    0.079    0.026    0.003
Class 2  0.000    0.864    0.000    0.095    0.042    0.000    0.000    0.000
Class 3  0.038    0.081    0.070    0.640    0.172    0.000    0.000    0.000
Class 4  0.000    0.037    0.000    0.957    0.006    0.000    0.000    0.000
Class 5  0.000    0.182    0.000    0.190    0.629    0.000    0.000    0.000
Class 6  0.151    0.007    0.000    0.008    0.047    0.735    0.004    0.047
Class 7  0.326    0.113    0.000    0.008    0.050    0.000    0.490    0.013
Class 8  0.005    0.010    0.000    0.000    0.010    0.090    0.000    0.885

To improve the results for every class of ship, individual (or specialized) classifiers were also implemented using the DS method for each class, i.e., an individual classifier returns whether the ship belongs to the class or not. For every individual classifier, one chooses a subset of


features that optimizes the performance for that class. The recognition rate of the DS method increases, sometimes substantially, for each individual DS classifier:

1. For class 1 (D), from 87.1% to 93.2% (only attributes 1, 3, 6, 9, 10, 11 are used).

2. For class 2 (CO), from 86.4% to 95.8% (only attributes 1, 6, 9 are used).

3. For class 3 (CF), from 7% to 24.2% (only attributes 4, 5, 11 are used).

4. For class 4 (AOR), from 95.7% to 98.4% (only attributes 1, 2, 7, 8, 9, 10, 11 are used).

5. For class 5 (LAT), from 62.9% to 74.7% (only attributes 2, 3, 7, 11 are used).

6. For class 6 (F), from 73.5% to 80.3% (only attributes 2, 4, 7, 11 are used).

7. For class 7 (CR), from 49% to 68.6% (only attributes 1, 3, 5, 11 are used).

8. For class 8 (DGM), from 88.5% to 92.8% (only attributes 2, 4, 5, 10 are used).

Clearly, each individual specialized DS classifier gives better results for its class than the generic DS classifier.

4.5.2. Additive Bayes classifier

Bayes classifiers use a probabilistic approach to assign a class. They compute the conditional probabilities of the different classes given the values of the attributes and then predict the class with the highest conditional probability.

The equation below represents the probability of an object belonging to the i-th class (Ci) knowing the value of the j-th attribute (Aj), where i indexes the classes, i = {1, 2, …, m}, and j the attributes, j = {1, 2, …, n}.

P(Ci|Aj) = P(Aj|Ci) P(Ci) / ∑i=1…m P(Aj|Ci) P(Ci)

One then computes the probability of an object being in class i knowing the value of the j-th attribute (for each attribute), and sums them (rather than taking successive products as in the strict Bayes rule), resulting in a modified additive Bayes classifier.

P(Ci) = ∑j=1…n P(Ci|Aj)

Finally, the class Cj of the object is identified as the class with the highest probability.

j = Arg max{1 ≤ i ≤ m} [P(Ci)]
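The three equations above can be sketched as follows. The conditional probabilities P(Aj|Ci) for the observed bin of each attribute would come from the frequency graphs; the numbers and names below are illustrative only.

```python
def additive_bayes(cond, priors):
    """Modified additive Bayes classifier.

    cond[c][j] = P(A_j | C_c) for the observed bin of attribute j;
    priors[c]  = P(C_c).
    Per-attribute posteriors are computed with Bayes' rule, then SUMMED
    (not multiplied, as the strict Bayes rule would do) into a class score.
    Returns the best class and the score of every class.
    """
    n_attr = len(next(iter(cond.values())))
    scores = {}
    for c in cond:
        s = 0.0
        for j in range(n_attr):
            denom = sum(cond[k][j] * priors[k] for k in cond)
            s += (cond[c][j] * priors[c] / denom) if denom else 0.0
        scores[c] = s
    return max(scores, key=scores.get), scores
```

For two classes and two attributes with equal priors, the per-attribute posteriors sum into the class scores and the arg-max picks the winner.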

The confusion matrix for this modified additive Bayes classifier results in an AIR of 77.7% and is shown in Table 17 below.


Table 17. Confusion matrix for the modified Bayes classifier with AIR = 77.7%

         Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
Class 1  0.767    0.030    0.000    0.000    0.000    0.045    0.124    0.035
Class 2  0.000    0.784    0.026    0.101    0.086    0.000    0.004    0.000
Class 3  0.018    0.072    0.514    0.288    0.072    0.018    0.018    0.000
Class 4  0.003    0.017    0.030    0.922    0.017    0.000    0.000    0.010
Class 5  0.010    0.107    0.049    0.039    0.741    0.015    0.010    0.029
Class 6  0.164    0.030    0.000    0.000    0.000    0.691    0.018    0.097
Class 7  0.110    0.110    0.000    0.000    0.007    0.000    0.706    0.066
Class 8  0.009    0.009    0.000    0.000    0.000    0.051    0.000    0.932

It should be noted that the classic Bayes classifier would give the same results as the generic DS classifier in the absence of ignorance, since both use the same statistical inputs (frequency graphs) to determine either the probability or the mass, and both methods multiply these quantities and renormalize at each fusion step.

4.5.3. K-nearest neighbours classifier

The k-nearest neighbour (k-NN) classifier finds the k nearest neighbours based on a metric distance and returns the class with the greatest frequency (majority vote).

A distance weighted by the inverse of the inter-class covariance matrix Γ was used:

d²(x1, x2) = (x1 − x2)ᵀ Γ⁻¹ (x1 − x2)
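A sketch of this classifier with the covariance-weighted distance, where `inv_cov` plays the role of Γ⁻¹. A hypothetical 2-D example is used for brevity, whereas the report works in the 11-dimensional attribute space; function names are illustrative.

```python
from collections import Counter

def knn_classify(train, x, inv_cov, k=3):
    """Majority vote among the k nearest neighbours under the distance
    d^2(x1, x2) = (x1 - x2)^T Gamma^{-1} (x1 - x2).

    train: list of (feature_vector, class_label) pairs.
    """
    def d2(a, b):
        diff = [ai - bi for ai, bi in zip(a, b)]
        return sum(diff[r] * inv_cov[r][c] * diff[c]
                   for r in range(len(diff)) for c in range(len(diff)))
    nearest = sorted(train, key=lambda item: d2(item[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

With the identity matrix for `inv_cov` this reduces to ordinary Euclidean k-NN; the report's Γ⁻¹ instead down-weights directions of high inter-class covariance.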

The results of the k = 3 nearest neighbours classifier are shown in Table 18. Its high AIR of 94.8% indicates that the 11-dimensional distribution of vectors is well separated for this small set of FLIR images.

Table 18. Confusion matrix for the 3-NN classifier with AIR = 94.8%

         Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
Class 1  0.907    0.000    0.000    0.000    0.000    0.043    0.049    0.000
Class 2  0.000    1.000    0.000    0.000    0.000    0.000    0.000    0.000
Class 3  0.000    0.100    0.800    0.060    0.030    0.000    0.000    0.010
Class 4  0.004    0.013    0.008    0.971    0.014    0.000    0.000    0.000
Class 5  0.000    0.006    0.006    0.011    0.949    0.023    0.006    0.000
Class 6  0.000    0.000    0.000    0.000    0.022    0.877    0.007    0.094
Class 7  0.062    0.035    0.000    0.000    0.009    0.062    0.832    0.000
Class 8  0.010    0.000    0.000    0.000    0.000    0.020    0.000    0.980


4.5.4. Neural net classifier

We used the following parameters for the neural net classifier: two hidden layers, 50 neurons on the first layer, 30 neurons on the second layer, momentum = 0.5, maximal error = 0.001, epsilon = 0.1, maximal number of iterations = 100. The results are presented in Table 19 and show an AIR of 92.7%. This architecture was shown in Figure 9.

Table 19. Confusion matrix for the neural network classifier with AIR = 92.7%

Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7 Class 8

Class 1 0.858 0.000 0.000 0.000 0.000 0.080 0.056 0.006

Class 2 0.000 0.987 0.000 0.013 0.000 0.000 0.000 0.000

Class 3 0.000 0.000 0.840 0.150 0.000 0.010 0.000 0.000

Class 4 0.000 0.000 0.008 0.992 0.000 0.000 0.000 0.000

Class 5 0.000 0.000 0.012 0.046 0.943 0.000 0.000 0.000

Class 6 0.029 0.000 0.000 0.000 0.029 0.906 0.007 0.029

Class 7 0.106 0.000 0.000 0.009 0.000 0.000 0.885 0.000

Class 8 0.010 0.000 0.000 0.000 0.000 0.030 0.020 0.940

4.6. Fusers

Having four distinct classifiers at our disposal, we fuse them in two different ways:

1. Using an NN fuser for Bayes, DS and k-nearest neighbour classifiers,

2. Using a DS fuser for Bayes, neural net and k-nearest neighbour classifiers.

By deliberately not using a neural net classifier with the neural net fuser, nor the DS classifier with the DS fuser, it is hoped to preserve the best features of each approach and take full advantage of the variety of techniques employed.

4.6.1. Results of the first fuser approach with a neural net

Let us recall the classification results of each method:

• for the modified Bayes, an AIR of 77.7%,

• for the generic DS, an AIR of 74.5%,

• for the 3-nearest neighbour, an AIR of 94.8%.

The results of two or three classification methods were then fused with a feed-forward back-propagation neural network fuser. The neural network fuser has 16 or 24 inputs (these inputs are the results of selected subsets of two or three classifiers) and has eight outputs, one for


each class, with all other parameters fixed to those of the neural net classifier. The fuser was trained with 1000 samples and tested on 1500, and also trained with 1500 and tested on 1000, to check the variability of the performance improvement. From Table 20, one can see that the fuser performance is equal or superior to that of the best classifier, and the fuser provides the best improvements when the fused classifiers are not very efficient, as would be expected if more complementarity were present across poor classifiers (Tremblay & Valin, 2002, 2003).

Table 20. Fusion results of classifiers with feed-forward neural networks

Training size Testing size Bayes & DS Bayes & k-NN Bayes, k-NN & DS

1000 1500 81.4% 95% 95.1%

Best single classifier 77.3% 95% 95%

1500 1000 85.3% 95.5% 95.6%

Best single classifier 77.7% 94.6% 94.6%

Note that varying the training/test sample sizes has an effect of less than 0.5%.
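Structurally, the fuser just described amounts to a forward pass over the concatenated classifier outputs (16 or 24 inputs, eight outputs). The sketch below uses one hidden layer and random placeholder weights purely to illustrate the data flow; the report's fuser is trained by back-propagation and uses the neural net classifier's parameters.

```python
import math
import random

def fuser_forward(classifier_outputs, W1, b1, W2, b2):
    """One forward pass of the feed-forward fuser: the 8-way output vectors
    of the individual classifiers are concatenated, then mapped through a
    hidden layer to eight class scores. Returns (winning class index, scores)."""
    x = [v for out in classifier_outputs for v in out]   # concatenation
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    h = [sig(sum(w * xi for w, xi in zip(row, x)) + bb) for row, bb in zip(W1, b1)]
    y = [sig(sum(w * hi for w, hi in zip(row, h)) + bb) for row, bb in zip(W2, b2)]
    return max(range(len(y)), key=y.__getitem__), y

# Demo: three classifiers' 8-way outputs give 24 inputs (placeholder weights).
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(24)] for _ in range(10)]
b1 = [0.0] * 10
W2 = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(8)]
b2 = [0.0] * 8
cls, y = fuser_forward([[0.125] * 8] * 3, W1, b1, W2, b2)
```

Fusing only two classifiers would shrink the input layer to 16, exactly as in Table 20's two-classifier columns.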

4.6.2. Results of the second fuser by a measure-based method

In this case, one has replaced the DS classifier by a neural network classifier using the 11 input attributes with the same number of hidden layers and eight output classes, as described above, and one performs instead the classifier fusion using DS theory (Rhéaume et al., 2002).

The most important feature lies in the choice of the masses of the propositions for the class from the outputs of the individual classifiers.

For the Bayes classifier, the masses are identified with the a posteriori probabilities of occurrence of each class, and hence each proposition is a singleton; there are eight such propositions.

For the neural net classifier, the masses are the outputs for each class and hence each proposition is also a singleton; again, there are eight such propositions.

For the k-NN classifier, the situation is more complicated, since two complex propositions are selected, and the assignment of normalized masses to the two propositions is more involved. Thus, if d1 denotes the distance to the nearest neighbour, all classes represented in the hypershell [d1, Cd1] make up a proposition having mass m1:

m1 = (1/T) ∑i=1…k1 1/(k1 di)


where k1 is the number of neighbours in the hypershell. The other proposition is made up of the classes of the k nearest neighbours and has a similar expression for its mass m2:

m2 = (1/T) ∑i=1…k 1/(k di)

with T a normalization constant ensuring that m1 + m2 = 1. An example of results for such a DS fuser is given by the confusion matrix for the eight classes shown in Table 21.

Table 21. Measure-based method confusion matrix

Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7 Class 8

Class 1 0.963 0.000 0.000 0.000 0.000 0.037 0.000 0.000

Class 2 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000

Class 3 0.000 0.010 0.960 0.030 0.000 0.000 0.000 0.000

Class 4 0.000 0.004 0.000 0.996 0.000 0.000 0.000 0.000

Class 5 0.000 0.000 0.000 0.006 0.989 0.006 0.000 0.000

Class 6 0.015 0.000 0.000 0.000 0.000 0.942 0.000 0.043

Class 7 0.018 0.009 0.000 0.000 0.000 0.000 0.973 0.000

Class 8 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000

Naturally, the results turn out to depend on the hypershell thickness C that contains the various classes close to the selected nearest-neighbour hyperpoint, and on the value of k, which thus become parameters affecting somewhat the performance of the fusion. The measure-based method can have an AIR as high as 98.1% when the k-NN classifier is properly selected (k = 15 and C = 1.2).
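Assuming the two k-NN mass expressions take the inverse-distance form m1 ∝ ∑ 1/(k1 di) over the hypershell and m2 ∝ ∑ 1/(k di) over all k neighbours (an interpretation of the expressions printed above, which are partly garbled in the source), the assignment can be sketched as follows; the function name is illustrative.

```python
def knn_masses(dists, C, k):
    """Masses for the two k-NN propositions.

    Neighbours inside the hypershell [d1, C*d1] support the first
    proposition; all k nearest neighbours support the second; T
    normalizes so that m1 + m2 = 1.
    dists: sorted distances to the k nearest neighbours (d1 first).
    """
    d1 = dists[0]
    shell = [d for d in dists if d <= C * d1]
    k1 = len(shell)
    s1 = sum(1.0 / (k1 * d) for d in shell)   # un-normalized m1
    s2 = sum(1.0 / (k * d) for d in dists)    # un-normalized m2
    T = s1 + s2
    return s1 / T, s2 / T
```

With the report's selected parameters (k = 15, C = 1.2), only neighbours within 20% of the nearest distance would contribute to the first proposition.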

4.6.3. Comparing other approaches on the same real FLIR data

The same FLIR data can be treated through distributed learning for classification. A system in which an agent network processes observational data and outputs beliefs (in the DS sense) to a fusion centre module (the fuser) is considered (Rogova, Scott & Lollett, 2003). The agents are modelled using evidential neural networks, whose weights reflect the state of learning of the agents. One agent processes the seven moments, while another agent processes the AR parameters. Training of the network is guided by reinforcements received from the environment as decisions are made. Two different sequential decision-making mechanisms were attempted: the first is based on a pignistic ratio test and the second on the value of information criterion, providing for learning utilities (for more details see Rogova, Scott & Lollett, 2003). The results for the class recognition rate (the AIR) vary from 54.1% to 58.7%, far lower than what can be achieved by fusing several classifiers. These low results may be due to the common evidential neural net classifier design for the two agents, or to the fusion centre functionality, which receives


beliefs from the agents but makes decisions using pignistic probabilities. It could also be that distributed learning by only two agents may not be sufficient for this problem.

Different attributes can also be extracted from the same FLIR data (Demers, 2003). Image segmentation can also be performed by a more complicated, biologically motivated algorithm, namely the visual perception segmentation process, rather than simple thresholding. Two classifiers can then be built and fused by two fusers.

1. For the first classifier, the segmented ship is partitioned into seven equal sections along the x-axis and into two sections along the y-axis delimited by a centroid, and two sets of moments are calculated, one being structural, the other intensity-based, resulting in 14 attributes. The structural moments are computed on the part of the segmented ship that is above the centroid, i.e., on the discriminating part above the hull. Again DS theory is used to combine the 14 expert opinions, and the output is a belief score for each ship class. This is very similar to the generic DS classifier discussed previously, but over different attributes.

2. For the second classifier, a template-based method is used. Attribute extraction in this case consists of computing shape descriptors, which are then used for template matching. Since the algorithm is quite complex, the reader is referred to Demers (2003). A transformation must be made for the output to be compatible with the results of the DS classifier, namely masses summing to 1.

Results (Demers, 2003) show that the overall accuracy of the template-based classifier is slightly lower (73.1%) than that of the moment-based DS one (75.5%). Note that the latter DS result, using 14 different attributes calculated over seven ship sections, is quite close to the 74.5% for the 11 attributes described previously. This can be interpreted to show that attribute determination (in number and by procedure) is not very important and that an AIR of 75% is typical of DS classifiers.

Two fusers are then tested (Demers, 2003), one using the product rule, resulting in an AIR of 80.8%, and one using DS, resulting in an AIR of 80.5%. Again, either fuser increases the AIR substantially, because both classifiers have rather poor individual AIRs and are complementary. In this case the improvement was slightly over 6% compared with the average classifier result (74.3%).

These results should be compared with the results from the neural net fuser when fusing its two worst classifiers (Bayes and DS, for an average AIR of 76%). The result obtained in this case has a much better AIR of 84.4% (from Table 6), for an improvement of over 8% rather than 6%. This could be interpreted to mean that fuser design is indeed important when fusing poor classifiers, or alternatively that the choice of the type of classifiers to be fused is important, since classifiers may exhibit varying degrees of complementarity.

4.6.4. Conclusions on FLIR ISM classifiers and fusers

The results indicate that individual classifiers can be a good choice for identification of ship classes from FLIR images. In our particular case, the individual specialized DS classifiers consistently perform better.

One also showed that a feed-forward neural network fusing various combinations of Bayes, DS and k-NN classifiers results in better performance for this kind of identification, and shows the most marked improvements when the classifiers are themselves rather poor.


One also used a DS fuser on Bayes, neural net and k-NN classifiers, with a resulting improvement in performance of about 3% compared with the first fuser.

In all experiments, the performance of any fuser was always at least as good as the bestclassifier.


5. Conclusions

This report surveyed information fusion algorithms available for use in the airborne application of maritime surveillance by the CP-140 (Aurora), for both the positional fusion and the identity information fusion components of the Multi-Sensor Data Fusion (MSDF) function.

For positional fusion, a survey of single-scan and multi-scan association algorithms and positional update mechanisms, such as Kalman filters (or banks of Kalman filters), was performed.

For identity information fusion, it was recognized that the imaging sensors should have their own Image Support Modules (ISMs) that can generate identity information at different levels and send it to MSDF by reasoning over imagery attributes which are sometimes too detailed to form part of the a priori databases, e.g., superstructure distribution for the SSAR.

It was therefore imperative to survey the reasoning frameworks common in the artificial intelligence field, and to select those which are appropriate to deal with dissimilar attribute data coming from the imaging sensors involved in airborne data/information fusion. Among these reasoning frameworks, the most notable are fuzzy logic, neural networks, Bayesian reasoning, k-nearest neighbours and Dempster-Shafer evidential reasoning. Each of these finds an application in the design of Image Support Modules (ISMs) for the existing Forward-Looking Infra-Red (FLIR) and the upcoming Spotlight Synthetic Aperture Radar (SSAR).

Using these reasoning frameworks in a very selective and justified manner, the ISMs are designed according to the physical properties of the imagery which they aim to classify. The performance of these ISMs was demonstrated in a stand-alone fashion, leaving their integration into the full-fledged MSDF for the Aurora to a concluding technical report.

A hierarchical SSAR ISM is preferred, which is composed of a neural net for category definition, and a Bayes length classifier if the category has been found to be a line combatant ship, with additional neural nets for subtype definition when available imagery warrants it. The data used for training, validation and testing included both simulated imagery from a DRDC-O simulator and real imagery from a preliminary working version of the SSAR, with comparable results.

In the case of low-contrast FLIR imagery, it was shown that fusion of complementary FLIR classifiers can lead to excellent performance. Four classifiers were implemented (NN, Bayes, k-nearest neighbours, DS) and fused in two different ways (NN and DS). In all cases, the fused results were better than the best classifier. The data used were unclassified airborne FLIR data, obtained from the Naval Air Warfare Center at China Lake (USA) through the University of California at Irvine, and do not contain the full set of ships identified in the first report of this series. As a consequence, these results stand alone and will not be integrated into the third report of this series.


6. References

Boily, D., & Valin, P. (2002). Optimization and Benchmarking of Truncated Dempster-Shafer for Airborne Surveillance, NATO Advanced Research Workshop on Multisensor Data Fusion, Pitlochry, Scotland, United Kingdom, June 25 to July 7, 2000 (Kluwer Academic Publishers), NATO Science Series, II. Mathematics, Physics and Chemistry, Vol. 70, pp. 617-624, published in 2002.

Demers, H. (2003). Fusion of Two Imagery Classifiers: A Case Study, in Proceedings of the NATO ASI on Data Fusion for Situation Monitoring, Incident Detection, Alert and Response Management, held in Armenia, E. Shahbazian, G. Rogova and P. Valin, eds., 18-29 August 2003, Springer-Verlag, in press.

Dempster, A. (1967). Upper and Lower Probabilities Induced by a Multivalued Mapping, Ann. Math. Statist., Vol. 38, pp. 325-339, 1967.

Hecht-Nielsen, R. (1990), Neurocomputing, Addison-Wesley, Reading MA, 1990.

Henrich, W., Kausch, T., & Opitz, F. (2003). Data Fusion for the new German F124 Frigate: Concept and Architecture, Proceedings of the 6th International Conference on Information Fusion, FUSION 2003, Cairns, Queensland, Australia, 8-11 July 2003, CD-ROM ISBN 0-9721844-3-0, and paper proceedings, pp. 1342-1349, 2003.

Henrich, W., Kausch, T., & Opitz, F. (2004). Data Fusion for the Fast Attack Craft Squadron 2000: Concept and Architecture, Proceedings of the 7th International Conference on Information Fusion, FUSION 2004, Stockholm, Sweden, 29 June to 1 July 2004, CD-ROM ISBN 91-7170-000-00, and on the Internet at http://www.fusion2004.foi.se/papers/IF04-0842.pdf, 2004.

Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feed-forward networks are universal approximators, Neural Networks, Vol. 2, pp. 359-366, 1989.

Hu, M.K. (1962). Visual Pattern Recognition by Moment Invariants, IRE Transactions on Information Theory, IT-8, (1962), pp. 179-187.

Jane's Information Group (various years from 1979 to 1997):

Collection of Jane's Fighting Ships, London:

1995-96, edited by Cpt. Richard Sharpe

1993-94, edited by Cpt. Richard Sharpe

1991-92, edited by Cpt. Richard Sharpe

1989-90, edited by Cpt. Richard Sharpe

1987-88, edited by Cpt. John Moore

1984-85, edited by Cpt. John Moore

1979-80, edited by Cpt. John Moore

Jane's Merchant Ships 1996-97, edited by David Greenman


Kittler, J., & Roli, F. (2000). Multiple Classifier Systems, Volume 1857, Springer-Verlag, Berlin, 2000.

Klepko, R. (1995). Automatic Pattern Classification of Airborne SAR Images of Ships,DRDC-O Report #1283, Ottawa, 1995 (DND Classified).

Konen, W., & Schulze-Kruger, E. (1995). ZN-Face: A System for Access Control Using Automated Face Recognition, Proceedings of the International Workshop on Automatic Face- and Gesture-Recognition, M. Bichsel, ed., Univ. Zurich, pp. 18-23, 1995.

Kälviäinen, H., Hirvonen, P., & Oja, E. (1996). Houghtool -- a software package for the use of the Hough transform, Pattern Recognition Letters, Vol. 17, No. 8, (1996), pp. 889-897.

Lefebvre, D. (1999). Classification de cibles navales à l'aide de réseaux de neurones à partir d'images radar à synthèse d'ouverture [Classification of naval targets from synthetic aperture radar images using neural networks], M.Sc. thesis, Université de Montréal, December 1999.

Osman, H., Blostein, S.D., & Gagnon, L. (1997). Classification of Ships in Airborne SAR Imagery Using Backpropagation Neural Networks, in Radar Processing, Technology and Applications II, SPIE Annual Meeting, Proceedings of Conf. 3161, San Diego, 27 July - 2 August 1997, pp. 126-136, 1997.

Park, Y., & Sklansky, J. (1990). Automated Design of Linear Tree Classifiers, Pattern Recognition, Vol. 23, No. 12, (1990), pp. 1393-1412.

Rao, N.S.V. (2001). On Design and Performance of Metafusers, in Proceedings of the Workshop on Estimation, Tracking and Fusion: A Tribute to Yaakov Bar-Shalom, Monterey, CA, May 2001, pp. 259-268.

Rao, N.S.V. (2002). Multisensor fusion under unknown distributions: Finite sample performance guarantees, in Multisensor Fusion, A.K. Hyder, E. Shahbazian and E. Waltz, eds., Kluwer Academic Publishers, 2002.

Rhéaume, F., Jousselme, A.-L., Grenier, D., Bossé, E., & Valin, P. (2002). New Initial Basic Probability Assignments for Multiple Classifiers, in SPIE AeroSense 2002, Orlando, Florida, April 1-5, 2002, SPIE Vol. 4729, pp. 319-328.

Rogova, G.L., Scott, P., & Lollett, C. (2003). Distributed Fusion: Learning in multi-agent systems for time critical decision making, in Proceedings of the NATO ASI on Data Fusion for Situation Monitoring, Incident Detection, Alert and Response Management, held in Armenia, E. Shahbazian, G. Rogova and P. Valin, eds., 18-29 August 2003, Springer-Verlag, in press.

Shafer, G. (1976). A Mathematical Theory of Evidence, Princeton University Press, 1976.

Steinberg, A.N., Bowman, C.L., & White, F.E. (1999). Revisions to the JDL Data Fusion Model, in Sensor Fusion: Architectures, Algorithms, and Applications, SPIE Proceedings, Vol. 3719, 1999.

Tremblay, C., & Valin, P. (2002). Experiments on Individual Classifiers and on Fusion of a Set of Classifiers, in FUSION 2002, Annapolis, MD, 8-11 July 2002, pp. 272-277, 2002.

Tremblay, C., & Valin, P. (2003). Dempster-Shafer Classifiers for FLIR Imagery and Neural Net Fusion of Complementary Classifiers, RTO SET-059 Symposium on Target Tracking


and Sensor Data Fusion for Military Observation Systems, Budapest, Hungary, 15-17 October 2003.

Valin, P. (2000). Reasoning Frameworks, NATO Advanced Study Institute on Multisensor and Sensor Data Fusion, Pitlochry, Scotland, United Kingdom, June 25 - July 7, 2000, Kluwer Academic Publishers, NATO Science Series II: Mathematics, Physics and Chemistry, Vol. 70, pp. 223-246.

Valin, P. (2001). Reasoning Frameworks for Fusion of Imaging and Non-imaging Sensor Information, in Proceedings of the Workshop on Estimation, Tracking and Fusion: A Tribute to Yaakov Bar-Shalom, Naval Postgraduate School, Monterey, CA, May 17, 2001, pp. 269-282.

Valin, P. (2002). Methods for the Fusion of Multiple FLIR Classifiers, in Proceedings of the Workshop on Signal Processing, Communication, Chaos and Systems: A Tribute to Rabinder N. Madan, June 20, 2002, Newport, RI, pp. 117-122.

Valin, P., & Boily, D. (2000). Truncated Dempster-Shafer Optimization and Benchmarking, in Sensor Fusion: Architectures, Algorithms, and Applications IV, SPIE Aerosense 2000, Orlando, Florida, April 24-28, 2000, Vol. 4051, pp. 237-246.

Valin, P., & Bossé, E. (2003). Using A Priori Databases for Identity Estimation through Evidential Reasoning in Realistic Scenarios, RTO IST Symposium on Military Data and Information Fusion, Prague, Czech Republic, 20-22 October 2003.

Valin, P., Tessier, Y., & Jouan, A. (1999). Hierarchical Ship Classifier for Airborne Synthetic Aperture Radar (SAR) Images, in Proceedings of the 33rd Asilomar Conference on Signals, Systems and Computers, 24-27 Oct. 1999, Pacific Grove, CA, pp. 1230-1234.

Zadeh, L.A. (1965). Fuzzy Sets, Information and Control, Vol. 8, pp. 338-353.

Zadeh, L.A. (1968). Fuzzy Algorithms, Information and Control, Vol. 12, pp. 94-102.


7. Acronyms

The list of acronyms presented here applies to all three documents: TM-281, TR-282 and TR-283.

AAM  Air-to-Air Missile
ADM  Advanced Development Model
AIFIE  Attribute Information Fusion techniques for target Identity Estimation
AIMP  Aurora Incremental Modernization Project
AIR  Average ID Rate
AOP  Area Of Probability
AR  Auto-Regressive
ASCACT  Advanced Shipborne Command and Control Technology
ASM  Air-to-Surface Missile
AWACS  Airborne Warning and Control System
BB  BlackBoard
BPA  Basic Probability Assignment
BPAM  Bayesian Percent Attribute Miss
C2  Command and Control
C4I  Command, Control, Communications, Computer and Intelligence
CAD  Computer-Aided Design
CANEWS  Canadian Electronic Warfare System
CASE-ATTI  Concept Analysis and Simulation Environment for Automatic Target Tracking and Identification
CCIS  Command and Control Information System
CCS  Command and Control System
CDO  Counter-Drug Operations
CF  Canadian Forces
CIO  Communications Intercept Operator
CIWS  Close-In Weapon System
CL  Confidence Level
CM  Centre of Mass
COMDAT  Command Decision Aid Technology
COMINT  Communications Intelligence
COTS  Commercial Off-The-Shelf
CPF  Canadian Patrol Frigate
CPU  Central Processing Unit
CRAD  Chief of Research and Development
CSES  Combat System Engineering Services
CSIS  Combat Support In-Service
DAAS  Decision Aids for Airborne Surveillance
DF  Data Fusion
DFCP  Data Fusion between Collaborating Platforms
DFDM  Data Fusion Demonstration Model


DFS  Direct Fleet Support
DM  Data Mile
DMS  Data Management System
DPG  Defence Planning Guidance
DRDC-O  Defence R&D Canada Ottawa
DRDC-V  Defence R&D Canada Valcartier
DS  Dempster-Shafer
DSC  Digital Scan Converter
EDM  Engineering Development Model
EGI  Embedded GPS and INS
ELINT  Electronic Intelligence
ELNOT  ELINT Notation
EMCON  Emission Control
ENL  Emitter Name List
EO  Electro-Optic
ESM  Electronic Support Measures
FLIR  Forward Looking Infra-Red
FM  Frequency Modulation
GIS  Geographical Information System
GPAF  General Purpose Air Forces
GPDC  General Purpose Digital Computer
GPL  Geo-Political Listing
GPS  Global Positioning System
HCI  Human Computer Interface
HLA  High Level Architecture
HW  Hardware
ID  Identification
IFF  Identification Friend or Foe
IMM  Interacting Multiple Model
INS  Inertial Navigation System
IR  Infra-Red
IRST  Infra-Red Search and Track
ISAR  Inverse SAR
ISIF  International Society of Information Fusion
ISM  Image Support Module
ISTDS  Internal System Track Data Store
JAIF  Journal of Advances in Information Fusion
JDL  Joint Directors of Laboratories
JPDA  Joint Probabilistic Data Association
JVC  Jonker-Volgenant-Castanon
KBS  Knowledge-Based System
LAMPS  Light Airborne Multi-Purpose System
LAP  Local Area Picture
LM  Lockheed Martin
MAAO  Maritime Air Area Operations
MAD  Magnetic Anomaly Detector


MALO  Maritime Air Littoral Operations
MARCOT  Maritime Coordinated Operational Training
MFA  Multi-Frame Association
MHP  Maritime Helicopter Project
MHT  Multiple Hypothesis Tracking
MIL-STD  Military Standard (US)
MOP  Measure Of Performance
MSDF  Multi-Source Data Fusion
MSP  Maritime Sovereignty Patrol
MTP  Maritime Tactical Picture
NASO  Non-Acoustic Sensor Operator
NATO  North Atlantic Treaty Organization
NAVCOM  Navigation Communication
NAWC  Naval Air Warfare Center
NCW  Network Centric Warfare
NEOps  Network-Enabled Operations
NILE  NATO Improved Link Eleven
NM  Nautical Mile
NN  Neural Network
OMI  Operator-Machine Interface
OODA  Observe, Orient, Decide, Act
OR  Object Recognition
OTHT  Over-The-Horizon-Targeting
PDA  Probabilistic Data Association
PDB  Platform Data Base
PU  Participating Unit
PWGSC  Public Works and Government Services Canada
R&D  Research and Development
RATT  Radio Teletype
RCMP  Royal Canadian Mounted Police
RCS  Radar Cross-Section
RDP  Range Doppler Profiler
RM  Resource Management
RMP  Recognized Maritime Picture
ROI  Region Of Interest
RPM  Revolutions Per Minute
SAM  Surface-to-Air Missile
SAR  Synthetic Aperture Radar
SARP  SAR Processor
SC  Ship Category
SDC-S  Signal Data Converter-Storer
SHINPADS  Shipboard Integrated Processing And Display System
SKAD  Survival Kit Air Droppable
SL  Ship Length
SNNS  Stuttgart Neural Net Simulator
SNR  Signal-to-Noise Ratio


SS  Sea State
SSAR  Spotlight SAR
SSC  Surface Surveillance and Control
SSM  Surface-to-Surface Missile
ST  Ship Type
STA  Situation and Threat Assessment
STANAG  Standardization NATO Agreement
STIM  Stimulation
SW  Software
TACNAV  Tactical Navigation
TD  Technology Demonstrator
TDS  Truncated DS
TM  Track Management
UN  United Nations
USC  Underwater Surveillance and Control
VOI  Volume Of Interest
WAP  Wide Area Picture
XDM  eXperimental Development Model


8. Annexes

8.1. General data/information fusion sources

Since information/data fusion is an emerging science that incorporates elements of physics, engineering, mathematical physics, and computational science, the International Society of Information Fusion (ISIF) was created in 1998, with a constitution approved in April 2000.

For ISIF, information fusion encompasses the theory, techniques and tools conceived and employed for exploiting the synergy in the information acquired from multiple sources (sensors, databases, information gathered by humans, etc.), such that the resulting decision or action is in some sense better (qualitatively or quantitatively, in terms of accuracy, robustness, etc.) than would be possible if any of these sources were used individually without such synergy exploitation. In doing so, events, activities and movements are correlated and analyzed as they occur in time and space, to determine the location, identity and status of individual objects (equipment and units), to assess the situation, to determine threats qualitatively and quantitatively, and to detect patterns of activity that reveal intent or capability. Specific technologies are required to refine, direct and manage information fusion capabilities.

The ISIF web site at http://www.inforfusion.org contains much of the crucial documentation in the whole domain. The results contained in this series of reports were presented in part at the first seven ISIF-sponsored FUSION conferences:

2004: Stockholm, Sweden, at http://www.fusion2004.org/

2003: Cairns, Queensland, Australia at http://fusion2003.ee.mu.oz.au/

2002: Annapolis, Maryland, USA, at http://www.inforfusion.org/Fusion_2002_Website/index.htm

2001: Montreal, Quebec, Canada, at http://omega.crm.umontreal.ca/fusion/, with both Lockheed Martin Canada and DRDC-V as sponsors

2000: Paris, France, at http://www.onera.fr/fusion2000/

1999: Sunnyvale, California, USA, at http://www.inforfusion.org/fusion99/, during which the concept of ISIF first emerged

1998: Las Vegas, Nevada, USA, at http://www.inforfusion.org/fusion98/

The eighth such conference was held in Philadelphia, Pennsylvania, USA, on July 25-28, 2005 (see http://www.fusion2005.org/ for more details).

In addition, summaries were presented internationally for NATO through its Research and Technology Agency (RTA) symposia and its Advanced Study Institutes (ASI). Other venues where this research work was promulgated include the SPIE Aerosense series held in Orlando each year, and various other conferences. The SPIE Aerosense series was recently renamed the SPIE Defense & Security Symposium.

The ISIF community is also served by the Information Fusion journal published by Elsevier (see http://www.elsevier.com/wps/find/journaldescription.cws_home/620862/description for more information), and has an on-line journal of its own, the Journal of Advances in Information Fusion (JAIF), with information for submissions at http://www.inforfusion.org/JAIF-CFP-Oct28.htm.

As for the documentation specifically needed for this report, the section entitled References contains the complete list.

8.2. Specific related data/information fusion sources

The contents of this series of three reports are based on two contracts entitled

• Demonstrations of Data Fusion Concepts for Airborne Surveillance, and

• Demonstration of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance,

with the following 14 deliverables (the date of the first publication of each report is shown, as is the date of the final revision where applicable):

1. LM Canada Doc. No. 990001006, (1997a). MSDF Requirements Specification Document for Year 1 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), final Rev. 1 dated 27 September 1999.

2. LM Canada Doc. No. 990001007, (1997b). MSDF Design Document for Year 1 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), final Rev. 1 dated 27 September 1999.

3. LM Canada Doc. No. 990001008, (1998a). MSDF Implementation and Test Document for Year 1 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), final Rev. 1 dated 27 September 1999.

4. LM Canada Doc. No. 990001009, (1998b). MSDF Requirements Specification Document for Year 2 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), final Rev. 1 dated 27 September 1999.

5. LM Canada Doc. No. 990001010, (1998c). MSDF Design Document for Year 2 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), final Rev. 1 dated 27 September 1999.

6. LM Canada Doc. No. 990001011, (1999). MSDF Implementation and Test Document for Year 2 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), final Rev. 1 dated 27 September 1999.

7. LM Canada Doc. No. 990001012, (2000a). MSDF Requirements Specification Document for Year 3 of Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), Rev. 0, 23 February 2000.


8. LM Canada Doc. No. 990001013, (2000b). MSDF Design Document for Year 3 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), Rev. 0, 23 February 2000.

9. LM Canada Doc. No. 990001014, (2000c). MSDF Implementation and Test Document for Year 3 of PWGSC Contract No. W7701-6-4081 on Real-Time Issues and Demonstrations of Data Fusion Concepts for Airborne Surveillance (and references therein), Rev. 1, 20 March 2000.

10. LM Canada DM No. 990001234-a, (2001a). Detailed Design Document - Part 1, Demonstrations of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance, Contract No. W2207-8-EC01, Rev. 0, 22 January 2001.

11. LM Canada DM No. 990001234-b, (2001b). Detailed Design Document - Part 2, Demonstrations of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance, Contract No. W2207-8-EC01, Rev. 0, 22 January 2001.

12. LM Canada DM No. 990001235-a, (2001c). Testing and Benchmarking IMM-CVCA vs Kalman Filtering, Demonstrations of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance, Contract No. W2207-8-EC01, Rev. 0, 22 January 2001.

13. LM Canada DM No. 990001235-b, (2001d). Testing and Benchmarking Ship Classifier for SAR Imagery, Demonstrations of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance, Contract No. W2207-8-EC01, Rev. 0, 22 January 2001.

14. LM Canada DM No. 990001236, (2001e). Final Report, Demonstrations of Image Analysis and Object Recognition Decision Aids for Airborne Surveillance, Contract No. W2207-8-EC01, Rev. 0, 22 January 2001.


9. Distribution list

INTERNAL DISTRIBUTION

DRDC Valcartier TR 2004-282

1 - Director General

3 - Document Library

1 - Head/DSS

1 - Head/IKM

1 - Head/SOS

1 - P. Valin (author)

1 - A. Jouan

1 - R. Breton

1 - J.M.J. Roy

1 - S. Paradis

1 - A. Guitouni

1 - LCdr E. Woodliffe

1 - Maj. G. Clairoux

1 - Maj. M. Gareau

1 - Maj. B. Deschênes

1 - LCdr É. Tremblay

1 - P. Maupin

1 - A-L. Jousselme

1 - F. Rhéaume

1 - A. Benaskeur

1 - M. Allouche

1 - A.C. Boury-Brisset


EXTERNAL DISTRIBUTION

DRDC Valcartier TR 2004-282

1 - Director Research and Development Knowledge and Information Management (PDF file)

1 - Defence Research and Development Canada

3 - Director General Operational Research

1 - 3d Thrust Leader: Doreen Dyck, DRDC-Ottawa

4 - Director General Joint Force Development

2 - Director Science and Technology (C4ISR)

2 - Director Science and Technology (Air)

2 - Director Science and Technology (Land)

2 - Director Science and Technology (Maritime)

2 - Director Science and Technology (Human Performance)

2 - Director Maritime Requirements Sea

1 - Director Maritime Requirements Sea 4

1 - Director Maritime Requirements Sea 6

1 - Director Maritime Requirements Sea 6-2

2 - Director Air Requirements

1 - Director Air Requirements 4

1 - Director Air Requirements 3

2 - Director Maritime Ship Support

2 - Director Maritime Ship Support 6

2 - Director Maritime Ship Support 8

1 - Director Science and Technology Air - 3 (3d)

1 - Director Science and Technology Maritime - 2 (1b)

1 - Director Science and Technology Maritime - 3 (1a)

1 - Director Science and Technology Maritime - 5 (1c)

1 - Director Science and Technology Land - 2

2 - Director Land Requirements

1 - Canadian Forces Experimentation Centre


2 - Defence Research and Development Canada -Toronto:

Attn: R. Pigeau

K. Hendy

5 - Defence Research and Development Canada -Atlantic:

Attn: J.S. Kennedy

B. Chalmers

B. McArthur

M. Hazen

LCdr B. MacLennan

2 - Defence Research and Development Canada Ottawa:

Attn: P. Lavoie

A. Damini

1 - PMO MHP

Attn: P. Labrosse

Col. W.O. Istchenko

1 - PMO HMCCS

Attn: DMSS 8

1 - PMO Aurora

GB Lewis, DAEPMM 5

1 - Evolved Sea Sparrow Missile Project Manager

1 - Halifax Modernized Command and Control System Project Manager

1 - Canadian Forces Command and Staff College, Toronto, Attn: Commanding Officer

1 - Canadian Forces Maritime Warfare School, CFB Halifax, Halifax, Nova Scotia, Attn: Commanding Officer


2 - Canadian Forces Maritime Warfare Centre, CFB Halifax, NS, Attn: TAC AAW, OIC Modeling and Simulation

2 - Canadian Forces Naval Operations School, CFB Halifax, NS, Attn: Tactics CT AWW

1 - Canadian Forces Naval Engineering School, CFB Halifax, NS, Attn: CSST

1 - Operational Requirements Analysis Cell, CFB Halifax, NS, Attn: Commanding Officer

1 - Canadian Forces Fleet School, CFB Esquimalt, BC, Attn: Commanding Officer/WTD

1 - Operational Requirements Analysis Cell, CFB Esquimalt, BC, Attn: Commanding Officer


UNCLASSIFIED SECURITY CLASSIFICATION OF FORM

(Highest Classification of Title, Abstract, Keywords)

DOCUMENT CONTROL DATA

1. ORIGINATOR (name and address) Defence R&D Canada Valcartier 2459 Pie-XI Blvd. North QUEBEC, QC G3J 1X5

2. SECURITY CLASSIFICATION (Including special warning terms if applicable) Unclassified

3. TITLE (Its classification should be indicated by the appropriate abbreviation (S, C, R or U)) Airborne application of information fusion algorithms to classification (U)

4. AUTHORS (Last name, first name, middle initial. If military, show rank, e.g. Doe, Maj. John E.) Valin, P., Bossé, E., Jouan, A.

5. DATE OF PUBLICATION (month and year) May 2006

6a. NO. OF PAGES 64

6b. NO. OF REFERENCES 39

7. DESCRIPTIVE NOTES (the category of the document, e.g. technical report, technical note or memorandum. Give the inclusive dates when a specific reporting period is covered.)

Technical Report

8. SPONSORING ACTIVITY (name and address)

9a. PROJECT OR GRANT NO. (Please specify whether project or grant) 13DV

9b. CONTRACT NO.

10a. ORIGINATOR’S DOCUMENT NUMBER TR 2004-282

10b. OTHER DOCUMENT NOS

N/A

11. DOCUMENT AVAILABILITY (any limitations on further dissemination of the document, other than those imposed by security classification)

Unlimited distribution
Restricted to contractors in approved countries (specify)
Restricted to Canadian contractors (with need-to-know)
Restricted to Government (with need-to-know)
Restricted to Defense departments
Others

12. DOCUMENT ANNOUNCEMENT (any limitation to the bibliographic announcement of this document. This will normally correspond to the Document Availability (11). However, where further distribution (beyond the audience specified in 11) is possible, a wider announcement audience may be selected.)


13. ABSTRACT (a brief and factual summary of the document. It may also appear elsewhere in the body of the document itself. It is highly desirable that the abstract of classified documents be unclassified. Each paragraph of the abstract shall begin with an indication of the security classification of the information in the paragraph (unless the document itself is unclassified) represented as (S), (C), (R), or (U). It is not necessary to include here abstracts in both official languages unless the text is bilingual).

The objective of the report is to survey the reasoning frameworks common in the artificial intelligence field for identity information fusion, and to select those that are appropriate to deal with dissimilar data coming from sensors involved in airborne data/information fusion. The Image Support Module (ISM) for the existing Forward-Looking Infra-Red (FLIR) sensor will make use of many of these reasoning frameworks in parallel, and actually fuse the results coming from these complementary classifiers. The ISM for the upcoming Spotlight Synthetic Aperture Radar (SSAR) will incorporate some of these reasoning methods in a hierarchical manner to provide multiple inputs to the Multi-Sensor Data Fusion (MSDF) module. The data used are a combination of simulated and real imagery for the SSAR, and unclassified airborne data for the FLIR, obtained from China Lake through the University of California at Irvine.

14. KEYWORDS, DESCRIPTORS or IDENTIFIERS (technically meaningful terms or short phrases that characterize a document and could be helpful in cataloguing the document. They should be selected so that no security classification is required. Identifiers, such as equipment model designation, trade name, military project code name, geographic location may also be included. If possible keywords should be selected from a published thesaurus, e.g. Thesaurus of Engineering and Scientific Terms (TEST) and that thesaurus-identified. If it is not possible to select indexing terms which are Unclassified, the classification of each should be indicated as with the title.)

Information fusion, CP-140 Aurora, Dempster-Shafer, Bayes, fuzzy logic, neural networks, surveillance, scenarios, SAR, simulator, FLIR, classifier, Image Support Module, fusion of classifiers.


Canada's Leader in Defence and National Security

Science and Technology

Chef de file au Canada en matière de science et de technologie pour la défense et la sécurité nationale

www.drdc-rddc.gc.ca

Defence R&D Canada R & D pour la défense Canada