Hindawi Publishing Corporation, The Scientific World Journal, Volume 2013, Article ID 632437, 7 pages. http://dx.doi.org/10.1155/2013/632437
Research Article: Wavelet Neural Network Using Multiple Wavelet Functions in Target Threat Assessment
Gaige Wang,1,2 Lihong Guo,1 and Hong Duan3
1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 Graduate School of Chinese Academy of Sciences, Beijing 100039, China
3 School of Computer Science and Information Technology, Northeast Normal University, Changchun 130117, China
Correspondence should be addressed to Lihong Guo; guolh@ciomp.ac.cn
Received 13 December 2012; Accepted 15 January 2013
Academic Editors: J. Bajo and Q. Zhao
Copyright © 2013 Gaige Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Target threat assessment is a key issue in collaborative attack. To improve the accuracy and usefulness of target threat assessment in aerial combat, we propose a variant of wavelet neural networks, the MWFWNN network, to solve threat assessment. Selecting an appropriate wavelet function is difficult when constructing a wavelet neural network. This paper proposes a wavelet mother function selection algorithm based on minimum mean squared error and then constructs the MWFWNN network using this algorithm. Firstly, a wavelet function library is established; secondly, a wavelet neural network is constructed with each wavelet mother function in the library, and the wavelet function parameters and the network weights are updated according to the relevant modifying formulas. Each constructed wavelet neural network is evaluated on the training set, and the optimal wavelet function, with minimum mean squared error, is chosen to build the MWFWNN network. Experimental results show that the mean squared error is 1.23 × 10⁻³, which is better than WNN, BP, and PSO-SVM. The target threat assessment model based on the MWFWNN has good predictive ability, so it can quickly and accurately complete target threat assessment.
1. Introduction
With the development of science and technology, the requirement for information is increasing in modern warfare. To adapt to this change, many countries have pursued research on multisensor information fusion since the 1970s. After years of research, the United States, Britain, and other military powers have developed a number of information fusion systems that can be used for combat. Target threat assessment belongs to the third level of the information fusion model and is a kind of high-level information fusion. Target threat assessment is the essential basis for the allocation of force and fire in a C4ISR system.
The traditional methods for threat assessment are Bayesian inference [1, 2], multiattribute decision-making theory [3], GSO-BP [4], Elman-AdaBoost [5], analytic hierarchy process [6], Dempster-Shafer theory [7], hypothesis-driven fusion [8], and so forth. These methods are based on a constant weight vector and must rely on available expertise, which significantly increases the subjective element of threat assessment and makes inaccurate evaluations highly likely, so the complex relationships between evaluation indicators are not effectively reflected. In addition, the models created by these methods have a fatal drawback: they lack self-learning and adaptive capacity, so it is difficult for them to adapt as changes in the performance and tactics of enemy air attack weapons alter the weight of each factor. As the neural network has many advantages, such as strong learning ability and adaptability, it is better suited to target threat assessment than the above-mentioned methods. BP is a mature and effective method with the advantages of a rigorous derivation process, solid theoretical basis, strong versatility, and clear physical concepts. The literature [7, 8] studies threat assessment using BP networks and achieves good results. However, with the dimension of the training
data increasing, the convergence of BP slows down and network performance deteriorates; moreover, during training it easily falls into local minimum solutions. The wavelet neural network (WNN) has many advantages over other neural networks: its parameters (hidden nodes and weights) are more easily determined than those of radial basis function (RBF) neural networks; it requires a smaller amount of training data than a multilayer perceptron network; and it converges quickly. For the same approximation quality, the wavelet neural network requires fewer nodes. For WNN, one of the biggest drawbacks is the difficulty of choosing the mother wavelet function, so this paper proposes an algorithm for selecting the optimal mother wavelet function. We then construct the multiple wavelet function wavelet neural network (MWFWNN), using the above method to select the optimal mother wavelet function, for threat assessment.
2. MWFWNN Network
2.1. Wavelet Theory. First proposed by Grossmann and Morlet in the 1980s, wavelet theory [9] is a mathematical theory and analysis method that makes up for the shortcomings of the Fourier transform. In the field of signal processing, the most widely used analysis method is the Fourier transform, but it has an obvious deficiency: it cannot distinguish events in the time domain, because time information is not included in the results of the Fourier transform. A wavelet is a special waveform with zero mean and limited length.
A wavelet function is constructed through a series of basic transformations of a mother wavelet function. Not all functions can be used as a wavelet mother function: for a function to be usable as a wavelet, and then to develop into a good wavelet transform function, it must satisfy many conditions. Therefore, it is difficult to find practical wavelet functions, and some practical wavelet functions do not have analytic expressions.
Let φ(t) be a square integrable function, that is, φ(t) ∈ L²(R). If its Fourier transform Ψ(ω) satisfies the following compatibility condition:

$$\int_{R} \frac{|\Psi(\omega)|^{2}}{|\omega|}\, d\omega < \infty, \tag{1}$$
then φ(t) is called a basic wavelet or mother wavelet function. We apply translation and scaling to the wavelet function, with translation factor τ and scale factor (also known as the expansion factor) a, so that we obtain the function φ_{a,τ}(t):

$$\varphi_{a,\tau}(t) = |a|^{-1/2}\,\Psi\!\left(\frac{t-\tau}{a}\right), \quad a > 0,\ \tau \in R. \tag{2}$$

As the translation factor τ and the scale factor a are continuous variables whose values can be positive or negative, φ_{a,τ}(t) is called a continuous wavelet function.
The wavelet transform computes the inner product between the signal x(t) and the mother wavelet function:

$$f_{x}(a,\tau) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\,\varphi^{*}\!\left(\frac{t-\tau}{a}\right) dt. \tag{3}$$
An equivalent expression in the frequency domain is given as

$$f_{x}(a,\tau) = \frac{\sqrt{a}}{2\pi} \int_{-\infty}^{\infty} X(\omega)\,\Psi^{*}(a\omega)\,e^{j\omega\tau}\, d\omega, \tag{4}$$

where a > 0, τ ∈ R, and X(ω) and Ψ(ω) are the Fourier transforms of x(t) and φ(t), respectively.
The conclusion to be drawn from (3) and (4) is that wavelet analysis can analyze the local characteristics of a signal through transformations of the mother wavelet function; therefore, wavelet theory is considered a breakthrough beyond the Fourier transform, and it has been successfully applied to image processing, optical device detection, signal analysis, and other fields.
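As a concreteness check, the inner product of eq. (3) can be sketched numerically. This is an illustrative Python sketch (the paper's own implementation is in MATLAB), using the Morlet mother wavelet defined later in eq. (14); the real-valued Morlet equals its own complex conjugate:

```python
import numpy as np

def morlet(t):
    # Morlet mother wavelet, eq. (14): cos(1.75 t) exp(-t^2 / 2)
    return np.cos(1.75 * t) * np.exp(-t**2 / 2)

def cwt(x, t, a, tau, psi=morlet):
    # Riemann-sum approximation of eq. (3):
    # f_x(a, tau) = (1 / sqrt(a)) * integral of x(t) psi*((t - tau) / a) dt
    dt = t[1] - t[0]
    return np.sum(x * psi((t - tau) / a)) * dt / np.sqrt(a)

t = np.linspace(-10, 10, 2001)
x = morlet(t)                        # a signal equal to the wavelet itself
matched = cwt(x, t, a=1.0, tau=0.0)  # wavelet aligned with the signal
shifted = cwt(x, t, a=3.0, tau=5.0)  # stretched and displaced wavelet
print(matched, shifted)
```

The transform responds strongly where scale and shift match a local feature of the signal, which is exactly the "local characteristics" property the text describes.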
2.2. Wavelet Neural Network. The wavelet transform has time-frequency localization properties and focal features, while the neural network (NN) has self-adaptivity, fault tolerance, robustness, and strong inference ability. How to combine the advantages of the wavelet transform and NN to solve practical problems has been a research hot spot. The so-called wavelet neural network (WNN), or wavelet network (WN), is a combination of the two techniques and inherits the advantages of both the neural network and the wavelet transform. Proposed by Q. Zhang in 1992 [10], WNN uses a wavelet function as the activation function instead of the sigmoid activation function.
For WNN, the topology is based on the BP network: the transfer function of the hidden layer nodes is the mother wavelet function, and the signal propagates forward while the error is backpropagated during training. The network topology is shown in Figure 1. In Figure 1, x_1, x_2, …, x_n is the input vector; y_1, y_2, …, y_l is the predicted output; ω_ij and ω_jk are the weights connecting the layers; and h_j is the mother wavelet function.
For the input signal sequence x = (x_1, x_2, …, x_n), the output of the hidden layer is calculated as

$$h(j) = h_{j}\!\left[\frac{\sum_{i=1}^{n} \omega_{ij} x_{i} - b_{j}}{a_{j}}\right], \quad j = 1, 2, \ldots, m, \tag{5}$$
where h(j) is the output value of node j in the hidden layer, h_j is the mother wavelet function, ω_ij is the weight connecting the input layer and the hidden layer, b_j is the shift factor, and a_j is the stretch factor for h_j.
Currently, the choice of mother wavelet function has no standard theory; commonly used wavelet functions are Morlet, Haar, Daubechies (dbN), Symlet (symN), Meyer, Coiflet, biorthogonal wavelets, and so on.
The output of the output layer is calculated as

$$y(k) = \sum_{i=1}^{m} \omega_{ik}\, h(i), \quad k = 1, 2, \ldots, l, \tag{6}$$
Figure 1: Topology of the wavelet neural network (input layer x_1, …, x_n; hidden layer h_1, h_2, …, h_m; output layer y_1, …, y_l).
where h(i) is the output value of node i in the hidden layer, ω_ik is the weight connecting the hidden layer and the output layer, and l and m are the numbers of nodes in the output layer and the hidden layer, respectively.
For WNN, the weight-updating algorithm is similar to that of the BP network: the gradient method is used to update the mother wavelet function parameters and the connection weights between the layers, making the predicted output ever closer to the desired output. The weights of the WNN and the parameters of the wavelet function are updated as follows.
(1) Calculate the prediction error of the WNN:

$$e = \sum_{k=1}^{m} \left(y_{n}(k) - y(k)\right), \tag{7}$$

where y(k) is the predicted output value and y_n(k) is the expected output value of the network.
(2) Update the weights of the WNN and the parameters of the wavelet function according to the prediction error e:

$$\omega_{nk}^{(i+1)} = \omega_{nk}^{(i)} + \Delta\omega_{nk}^{(i+1)}, \qquad a_{k}^{(i+1)} = a_{k}^{(i)} + \Delta a_{k}^{(i+1)}, \qquad b_{k}^{(i+1)} = b_{k}^{(i)} + \Delta b_{k}^{(i+1)}, \tag{8}$$
where Δω_{nk}^{(i+1)}, Δa_k^{(i+1)}, and Δb_k^{(i+1)} are calculated from the network prediction error:

$$\Delta\omega_{nk}^{(i+1)} = -\eta\,\frac{\partial e}{\partial \omega_{nk}^{(i)}}, \qquad \Delta a_{k}^{(i+1)} = -\eta\,\frac{\partial e}{\partial a_{k}^{(i)}}, \qquad \Delta b_{k}^{(i+1)} = -\eta\,\frac{\partial e}{\partial b_{k}^{(i)}}, \tag{9}$$
where η is the learning rate.
The process of training the WNN is as follows.
(1) Data preprocessing: first, the original data is quantified and normalized, and then the data is divided into a training set and a testing set, for network training and testing, respectively.
(2) Initializing the WNN: the connection weights ω_ij and ω_jk, the translation factor b_k, and the scale factor a_k are randomly initialized, and the learning rate η is set.
(3) Training the network: input the training set into the WNN, compute the network's predicted output values, and calculate the error e between the output and the expected value.
(4) Updating the weights: update the mother wavelet function parameters and the network weights according to the prediction error e, bringing the predicted values of the network as close as possible to the actual values.
(5) If the results satisfy the given conditions, use the testing set to test the network; otherwise, return to step (3).
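The training procedure above can be sketched end to end. This is an illustrative Python sketch on toy data (not the paper's combat-situation data): for brevity, it computes the gradients of eq. (9) by finite differences over a flattened parameter vector and descends on the mean squared error, whereas the paper derives the analytic gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 12                             # input and hidden nodes
eta = 0.01                               # learning rate eta of eq. (9)

def morlet(t):
    return np.cos(1.75 * t) * np.exp(-t**2 / 2)

# Step (2): flat parameter vector [w_ij (m*n), w_k (m), a_j (m), b_j (m)]
theta = np.concatenate([0.1 * rng.normal(size=m * n),
                        0.1 * rng.normal(size=m),
                        np.ones(m), np.zeros(m)])

def predict(theta, X):
    w_ij = theta[:m * n].reshape(m, n)
    w_k = theta[m * n:m * n + m]
    a = theta[m * n + m:m * n + 2 * m]
    b = theta[m * n + 2 * m:]
    H = morlet((X @ w_ij.T - b) / a)     # eq. (5), all samples at once
    return H @ w_k                       # eq. (6), single output node

def mse(theta, X, y):
    return np.mean((predict(theta, X) - y) ** 2)

# Toy data standing in for the normalized training set
X = rng.uniform(size=(40, n))
y = rng.uniform(0.3, 0.8, size=40)

mse0 = mse(theta, X, y)
for epoch in range(200):                 # steps (3)-(5): predict, error, update
    g = np.zeros_like(theta)
    for i in range(theta.size):          # finite-difference gradient,
        d = np.zeros_like(theta)         # standing in for the analytic
        d[i] = 1e-5                      # derivatives of eq. (9)
        g[i] = (mse(theta + d, X, y) - mse(theta - d, X, y)) / 2e-5
    theta -= eta * g                     # eqs. (8)-(9) update
print(mse0, "->", mse(theta, X, y))
```

The error decreases over the epochs, mirroring step (4)'s goal of pulling the predicted values toward the actual values.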
2.3. MWFWNN. The MWFWNN algorithm is presented in this section.
2.3.1. MWFWNN Algorithm
Step 1 (initializing). Initialize the mother wavelet function library waveFunction = {waveFunction_i}, i = 1, 2, …, K, where K = ‖waveFunction‖ is the number of elements in waveFunction; the variance δ_i of each mother wavelet function waveFunction_i; and the optimal wavelet function waveFunction_best = waveFunction_1 with its variance δ_best = δ_1.
Step 2 (choosing the best mother wavelet function). For each mother wavelet function:

(i) update the weights and parameters of wavelet function waveFunction_i according to (7)–(9);
(ii) if δ_i < δ_best, then

δ_best = δ_i,  waveFunction_best = waveFunction_i.  (10)

End.
Step 3. Construct the MWFWNN using waveFunction_best as the mother wavelet function.
Step 4. Test the MWFWNN network constructed in Step 3 using the testing set.
Step 5. Analyze the results.
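The selection loop of Steps 1–3 can be sketched as follows. This is an illustrative Python sketch on toy data: `train_and_score` is a hypothetical stand-in that fits only the linear output weights on random hidden features, in place of the full WNN training of (7)–(9), and the library holds just three of the paper's seven wavelet functions:

```python
import numpy as np

# A subset of the paper's mother wavelet function library
wave_functions = {
    "haar":    lambda t: np.where((t >= 0) & (t <= 0.5), 1.0,
                          np.where((t > 0.5) & (t <= 1.0), -1.0, 0.0)),
    "morlet":  lambda t: np.cos(1.75 * t) * np.exp(-t**2 / 2),
    "mexihat": lambda t: (2 / (np.sqrt(3) * np.pi**0.25)
                          * (1 - t**2) * np.exp(-t**2 / 2)),
}

def train_and_score(psi, X, y):
    # Stand-in for Step 2: build a WNN whose hidden activation is psi and
    # return its training-set MSE. Only the output weights are fitted here.
    rng = np.random.default_rng(2)
    w_ij = rng.normal(size=(12, X.shape[1]))
    H = psi(X @ w_ij.T)
    w_k, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ w_k - y) ** 2)

rng = np.random.default_rng(3)
X = rng.uniform(size=(40, 6))
y = rng.uniform(0.3, 0.8, size=40)

best_name, best_mse = None, np.inf        # Step 1: initialize the best
for name, psi in wave_functions.items():  # Step 2: keep the lowest-MSE wavelet
    m = train_and_score(psi, X, y)
    if m < best_mse:
        best_name, best_mse = name, m
print(best_name, best_mse)                # Step 3: build MWFWNN with the winner
```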
3. Target Threat Assessment Using MWFWNN
Strictly speaking, threat assessment is an NP-hard problem belonging to the third level of the JDL information fusion model. Target threat assessment needs to consider many factors (such as geography, weather, and the enemy); the relations among these factors are not simple linear combinations, and it is difficult to determine a function between the target threat value and the various factors. Therefore, we must consider the various factors and their relationships when studying threat assessment. However, we consider
the following six factors in general: target type, target speed, target heading angle, target height, target interference, and target distance. We will test the performance of MWFWNN using these factors in this paper.
3.1. Target Threat Assessment Factors. We mainly consider the following six key factors when studying target threat assessment in this paper.
(1) Target type: large targets (such as fighter-bombers), small targets (such as stealth aircraft and cruise missiles), and helicopters.
(2) Target heading angle: such as 22°, 26°, and 6°.
(3) Target speed: such as 100 m/s, 500 m/s, and 220 m/s.
(4) Target height: such as very low, low, medium, and high.
(5) Target interference: such as strong, medium, and weak.
(6) Target distance: such as 100 km, 110 km, and 220 km.
3.2. Threat Assessment Model Using MWFWNN. We design the MWFWNN model according to the characteristics of the data. Because the input data is 6-dimensional and the output is 1-dimensional, the structure of the WNN is 6-12-1. Firstly, we feed the six indicators, that is, the target type, target speed, target heading angle, target interference, target height, and target distance, into the input layer. The hidden layer nodes are formed by the wavelet function, and the output layer outputs the predicted target threat assessment value under the current indicators. On the basis of the above analysis, we construct the target threat assessment model based on MWFWNN with these six selected indicators; its architecture is shown in Figure 2.
4. Model Simulation
In this section, we test the target threat assessment model using the MWFWNN proposed in Section 3.
4.1. Data Preprocessing. Part of the data used in our work is shown in Table 1. The target value is quantified using G. A. Miller's quantitative theory, which represents the degree of threat as extremely small, very small, little small, small, medium, large, little large, very large, and extremely large. The specific quantitative criteria for the properties are as follows.
(1) Target type: helicopters, large targets (such as fighter-bombers), and small targets (such as stealth aircraft and cruise missiles) are quantified by 3, 5, and 8, respectively.
(2) Target interference: strong, medium, weak, and none are quantified by 8, 6, 4, and 2, respectively.
(3) Target height: very low, low, medium, and high are quantified by 8, 6, 4, and 2, respectively.
(4) Target speed: 0 m/s~1800 m/s, in equal intervals of 200 m/s, is quantified by 9 to 1.
Figure 2: Architecture of the target threat assessment model based on MWFWNN (inputs: type x_1, speed x_2, heading angle x_3, interference x_4, height x_5, distance x_6; hidden layer h_1, h_2, …, h_12; output y).
(5) Target heading angle: 0°~36°, in equal intervals of 4°, is quantified by 9 to 1.
(6) Target distance: 0 km~450 km, in equal intervals of 50 km, is quantified by 9 to 1.
(7) Determining the target output: first normalize the various factors from the air combat situation, and then feed them into the MWFWNN proposed in Section 3.2. Finally, the MWFWNN outputs the threat assessment value.
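The equal-interval quantization of items (4)–(6) can be sketched with a small helper. This is an illustrative Python sketch; the function name is hypothetical, and the direction of the mapping (the low end of each range scoring 9, down to 1 at the high end) is an assumption read literally from the text:

```python
def quantize(value, lo, hi, n_levels=9):
    # Equal-interval quantization: [lo, hi] is split into n_levels bins,
    # scored from n_levels at the low end down to 1 at the high end
    # (assumed direction, following the text's "quantified by 9 to 1").
    width = (hi - lo) / n_levels
    level = n_levels - min(int((value - lo) / width), n_levels - 1)
    return level

# Target speed: 0-1800 m/s in 200 m/s steps
print(quantize(450, 0, 1800))   # 450 m/s lies in the third bin -> 7
# Target distance: 0-450 km in 50 km steps
print(quantize(300, 0, 450))    # 300 km lies in the seventh bin -> 3
```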
After quantifying the data, we can normalize the training set and testing set using the following expression:
$$f: x \rightarrow y = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \tag{11}$$

where x, y ∈ R^n, x_min = min(x), and x_max = max(x). The values are converted into the range [0, 1] through the normalization, that is, y_i ∈ [0, 1], i = 1, 2, …, n.
4.2. Analysis of Simulation Results. In this paper, we implement the MWFWNN algorithm in MATLAB R2009a, on a Pentium (R) 4 3.06 GHz CPU with 1 GB of memory (2 × 512 MB). The network's predictions are compared with WNN, PSO-SVM, and the BP neural network. The results show that the proposed network is superior to WNN, PSO-SVM, and the BP neural network.
4.2.1. Creating the Mother Wavelet Function Library. Because many mother wavelet functions have no explicit expression, we cannot work out their derivatives. Therefore, in this work we use only the following seven mother wavelet functions to build the library waveFunction. Their expressions are as follows.
(1) Haar wavelet function:

$$\Psi(t) = \begin{cases} 1, & 0 \le t \le \frac{1}{2}, \\ -1, & \frac{1}{2} < t \le 1, \\ 0, & \text{otherwise}. \end{cases} \tag{12}$$
Table 1: Part of the data.

No. | Type | Velocity (m/s) | Heading angle (°) | Interference | Height | Distance (km) | Threat value
1 | Large | 450 | 8 | Medium | Low | 300 | 0.5843
2 | Large | 400 | 3 | Strong | High | 100 | 0.5707
3 | Large | 450 | 16 | Medium | Low | 200 | 0.5333
4 | Large | 800 | 4 | Strong | High | 100 | 0.6895
5 | Large | 800 | 12 | Strong | Low | 320 | 0.6896
6 | Small | 530 | 6 | Strong | Medium | 230 | 0.6056
7 | Small | 650 | 8 | Strong | Medium | 200 | 0.7425
8 | Small | 700 | 12 | Strong | Low | 320 | 0.7336
9 | Small | 750 | 15 | Medium | Very low | 400 | 0.7541
10 | Small | 640 | 18 | Strong | Medium | 280 | 0.6764
11 | Helicopter | 90 | 12 | Weak | Very low | 320 | 0.3937
12 | Helicopter | 110 | 3 | No | Medium | 100 | 0.3927
13 | Helicopter | 100 | 9 | No | Medium | 260 | 0.3351
14 | Helicopter | 120 | 15 | No | Low | 160 | 0.3586
15 | Helicopter | 80 | 6 | Weak | High | 180 | 0.3471
Table 2: Prediction results of different mother wavelet functions.

Wavelet function | Haar | Gaussian | Morlet | Mexihat | Shannon | Meyer | GGW
MSE | 2.10 × 10⁻² | 1.03 × 10⁵⁴ | 1.23 × 10⁻³ | 1.27 × 10⁻³ | 1.11 × 10⁵⁵ | 3.90 × 10⁵⁷ | 1.60 × 10⁻²
Running time (s) | 4.74 | 4.76 | 4.85 | 4.88 | 4.84 | 5.24 | 4.88
(2) Gaussian wavelet function:

$$\Psi(t) = \frac{t}{\sqrt{2\pi}} \exp\left(-\frac{t^{2}}{2}\right). \tag{13}$$

(3) Morlet wavelet function:

$$\Psi(t) = \cos(1.75 t) \exp\left(-\frac{t^{2}}{2}\right). \tag{14}$$

(4) Mexican hat (Mexihat) wavelet function:

$$c = \frac{2}{\sqrt{3}}\,\pi^{-1/4}, \qquad \Psi(t) = c\,(1 - t^{2}) \exp\left(-\frac{t^{2}}{2}\right). \tag{15}$$

(5) Shannon wavelet function:

$$\Psi(t) = \frac{\sin \pi\left(t - \frac{1}{2}\right) - \sin 2\pi\left(t - \frac{1}{2}\right)}{\pi\left(t - \frac{1}{2}\right)}. \tag{16}$$

(6) Meyer wavelet function (approximate formula):

$$\Psi(t) = 35 t^{4} - 84 t^{5} + 70 t^{6} - 20 t^{7}. \tag{17}$$

(7) Wavelet function GGW, constructed by the authors:

$$\Psi(t) = \sin(3t) + \sin(0.3t) + \sin(0.03t). \tag{18}$$
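The seven expressions of eqs. (12)–(18) translate directly into code. The following illustrative Python sketch collects them in one library (the dictionary name and keys are our own; the paper's implementation is in MATLAB):

```python
import numpy as np

# Mother wavelet function library, eqs. (12)-(18)
wave_function_lib = {
    "haar":     lambda t: np.where((t >= 0) & (t <= 0.5), 1.0,
                           np.where((t > 0.5) & (t <= 1.0), -1.0, 0.0)),
    "gaussian": lambda t: t / np.sqrt(2 * np.pi) * np.exp(-t**2 / 2),
    "morlet":   lambda t: np.cos(1.75 * t) * np.exp(-t**2 / 2),
    "mexihat":  lambda t: (2 / (np.sqrt(3) * np.pi**0.25)
                           * (1 - t**2) * np.exp(-t**2 / 2)),
    "shannon":  lambda t: ((np.sin(np.pi * (t - 0.5))
                            - np.sin(2 * np.pi * (t - 0.5)))
                           / (np.pi * (t - 0.5))),
    "meyer":    lambda t: 35 * t**4 - 84 * t**5 + 70 * t**6 - 20 * t**7,
    "ggw":      lambda t: np.sin(3 * t) + np.sin(0.3 * t) + np.sin(0.03 * t),
}

for name, psi in wave_function_lib.items():
    print(f"{name}: psi(0.3) = {float(psi(0.3)):.4f}")
```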
We use the following parameters to initialize the network: input layer nodes M = 6, hidden layer nodes n = 12, output nodes N = 1, and parameter learning rates lr_1 = 0.01 and lr_2 = 0.001, respectively.
We construct a wavelet neural network using each wavelet function in the wavelet function library, input the training set into each network, and train it. The MSEs, from small to large, are Morlet < Mexihat < GGW < Haar < Gaussian < Shannon < Meyer, as shown in Table 2. From Table 2 we can conclude that the running times and MSEs of Morlet and Mexihat differ only slightly, but the MSEs of Gaussian, Shannon, and Meyer are extremely large, completely divorced from reality. The results we obtained are consistent with the fact that most papers adopt Mexihat or Morlet as the mother wavelet function when constructing wavelet neural networks. So we can use the wavelet functions Morlet or Mexihat as the mother wavelet function to create the MWFWNN network. In this paper, we use the Morlet wavelet function as the basic mother wavelet function to construct the MWFWNN.
4.2.2. Comparison with WNN, PSO-SVM, and BP. Similar to MWFWNN, the WNN, BP network, and support vector machine (SVM) can be used to solve target assessment. As the choice of the support vector machine parameters c and g has no uniform standard and relies on experience, in this paper we use the particle swarm optimization (PSO) algorithm to optimize the SVM parameters c and g. Next, we use the wavelet neural network, BP neural network, and PSO-SVM to solve threat assessment, and the results are compared with MWFWNN.
The structure of the wavelet neural network and the BP neural network is 6-12-1, according to the characteristics of the data used.
Table 3: Prediction results of BP, PSO-SVM, MWFWNN, and WNN.

Neural network | BP | PSO-SVM | MWFWNN | WNN
MSE | 9.39 × 10⁻³ | 4.80 × 10⁻³ | 1.23 × 10⁻³ | 4.01 × 10⁻³
Running time (s) | 1.86 | 106 | 4.88 | 5.10
Figure 3: Results of target threat assessment based on WNN, BP, MWFWNN, and PSO-SVM (threat assessment value versus sample number, comparing each model's predictions with the actual values).
The WNN and the other parameters are set as described in Section 4.2.1. PSO-SVM uses the LIBSVM toolbox [11], whose defaults are C-SVR and the RBF kernel function, with C = 1 and γ = 1/n. By default, the PSO local search factor is c_1 = 1.5, the global search factor is c_2 = 1.7, the maximum number of generations is maxgen = 200, the maximum population size is sizepop = 20, and the rate between V and X is k = 0.6 (k ∈ [0.1, 1.0], V = kX); the SVM parameter c ∈ [0.1, 100] and the parameter g ∈ [0.01, 1000].
Threat assessment is predicted by the trained WNN, BP network, and PSO-SVM, whose MSEs are 4.01 × 10⁻³, 9.39 × 10⁻³, and 4.80 × 10⁻³, all greater than that of MWFWNN (1.23 × 10⁻³), as shown in Table 3. As for running time, because the BP neural network calls MATLAB's built-in functions, its running time is the smallest; the wavelet neural network needs to call each mother wavelet function and its derivative, so it consumes more time; the MWFWNN network needs to find the optimal wavelet function in the library, so its running time exceeds that of the WNN; and PSO-SVM calls the PSO algorithm to optimize the parameters c and g, and since PSO optimization is more complex, it runs the slowest.
The prediction errors of BP, PSO-SVM, WNN, and MWFWNN are shown in Figure 3. The figure shows that the prediction errors of WNN, BP, and PSO-SVM are similar: their predictions are less than the true values for samples 1–10, while they are significantly greater than the true values for samples 11–15. The MWFWNN network error has a similar trend, but it is significantly smaller than that of the WNN, BP network, and PSO-SVM, with predictions closer to the expected values.
5. Conclusion
Based on the requirement to process information quickly in modern information warfare, and aiming at the characteristics of threat assessment in the data fusion functional model, we adopt the MWFWNN to solve threat assessment under comprehensive consideration of the various factors that influence the threat degree. After constructing a wavelet function library with 7 wavelet functions, we obtain the best-performing wavelet function, Morlet, as the mother wavelet function to construct the MWFWNN network, and its results are compared with the WNN, BP, and PSO-SVM networks. Simulation results show that the MSE of MWFWNN is 1.23 × 10⁻³, which is far better than WNN (4.01 × 10⁻³), the BP network (9.39 × 10⁻³), and PSO-SVM (4.80 × 10⁻³), achieving the desired goal. In future work, we will further expand the scale of the wavelet function library to find more suitable wavelet functions to solve threat assessment and other problems.
References
[1] F. Johansson and G. Falkman, "A Bayesian network approach to threat evaluation with application to an air defense scenario," in Proceedings of the 11th International Conference on Information Fusion (FUSION '08), pp. 1–7, July 2008.
[2] J. Yang, W. Y. Gao, and J. Liu, "Threat assessment method based on Bayesian network," Journal of PLA University of Science and Technology, vol. 11, no. 1, pp. 43–48, 2010.
[3] J. Li and L. Han, "Threat ordering of interference targets based on uncertain multi-attribute decision-making," Telecommunication Engineering, vol. 49, no. 10, pp. 62–64, 2009.
[4] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "Target threat assessment using glowworm swarm optimization and BP neural network," Journal of Jilin University, in press.
[5] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "The model and algorithm for the target threat assessment based on Elman-AdaBoost strong predictor," Acta Electronica Sinica, vol. 40, no. 5, pp. 901–906, 2012.
[6] X. D. Gu, Z. X. Tong, S. J. Chai, X. W. Yuan, and Y. L. Lu, "Target threat assessment based on TOPSIS combined by IAHP and the maximal deviation," Journal of Air Force Engineering University, vol. 12, no. 2, pp. 27–31, 2011.
[7] K. Sycara, R. Glinton, B. Yu, et al., "An integrated approach to high-level information fusion," Information Fusion, vol. 10, no. 1, pp. 25–50, 2009.
[8] A. Kott, R. Singh, W. M. McEneaney, and W. Milks, "Hypothesis-driven information fusion in adversarial, deceptive environments," Information Fusion, vol. 12, no. 2, pp. 131–144, 2011.
[9] A. Grossmann and J. Morlet, "Decomposition of Hardy functions into square integrable wavelets of constant shape," SIAM Journal on Mathematical Analysis, vol. 15, no. 4, pp. 723–736, 1984.
[10] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 889–898, 1992.
[11] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 27, 2011.
2 The Scientific World Journal
data increasing the convergence of BP slows down and thenetwork performance deteriorates moreover in the trainingprocess it is easy to fall into local minimum solutionWaveletneural network (WNN) has many advantages compared withother neural networks for example the parameters (hiddennodes and weight) aremore easily determined than the radialbasis function (RBF) neural networks it requires smallertraining amount than multilayer perceptron network alsowavelet neural network has a fast convergence In the sameapproximation quality the wavelet neural network requiresfewer nodes For WNN one of the biggest drawbacks is thedifficulty of the choice of mother wavelet function so thispaper proposes an algorithm for selecting optimal motherwavelet function And thenwe construct the (MultipleWave-let FunctionWaveletNeuralNetworks)MWFWNNusing theabove method to select optimal mother wavelet function forthreat assessment
2 MWFWNN Network
21 Wavelet Theory Firstly proposed by Grossman andMorlet in the 1980s wavelet theory [9] is a mathematicaltheory and analysis method to make up the shortages ofFourier transform In the field of signal processing the mostwidely used analysis method is the Fourier transform butit has obvious deficiency that the Fourier transform hasno distinguishable ability in the time domain because thetime information is not included in the results of Fouriertransform Wavelet is special waveform with the mean 0 andthe limited length
Wavelet function is constructed through a series of basictransformation with a mother wavelet function Not all fun-ctions can be used as wavelet mother function if a waveletfunction is to be available and then develop into a good wave-let transform function it must satisfy many conditionsTherefore it is difficult to find the practical wavelet functionIn the practical wavelet functions some of them do not haveexpressions
Let 120593(119905) be a square integrable function that is 120593(119905) isin1198712
(119877) If its Fourier transform Ψ(120596) can satisfy the followingcompatibility condition
int
119877
|Ψ (120596)|2
120596d120596 lt infin (1)
then 120593(119905) is called a basic wavelet or mother wavelet functionWe make translation and scale for wavelet function thetranslation factor 120591 and the scale factor (also known as theexpansion factor) 119886 so that we get function 120593
119886120591
(119905)
120593119886120591
(119905) = 11988612
Ψ(119905 minus 120591
119886) 119886 gt 0 120591 isin 119877 (2)
As the translation factor 120591 and the scale factor 119886 are con-tinuous variables their value can be positive or negative soΨ119886120591
(120596) is called continuous wavelet function (also called themother wavelet function)
Wavelet transform calculates the inner product betweenthe signal 119909(119905) with mother wavelet function
119891119909
(119886 120591) =1
radic119886int
infin
minusinfin
119909 (119905) 120593lowast
(119905 minus 120591
119886) d119905 (3)
Equivalent expression in time domain is given as
119891119909
(119886 120591) =radic119886
2120587int
infin
minusinfin
119883 (120596)Ψlowast
(119886120596) 119890119895120596120591d120596 (4)
where 119886 gt 0 120591 isin 119877 119883(120596) and Ψ(120596) are the Fourier trans-form of 119909(119905) and 120593(119905) respectively
The conclusion can be drawn from (3) and (4) that iswavelet analysis can analyze the local characteristics of thesignal through the mother wavelet function transformationtherefore wavelet theory is considered to be the break-through for the Fourier transform and the theory has beensuccessfully applied to image processing optical devicesdetection and signal analysis and other fields
2.2. Wavelet Neural Network. The wavelet transform has time-frequency localization and focal properties, and a neural network (NN) is self-adaptive and fault tolerant, with robustness and strong inference ability. How to combine the advantages of the wavelet transform and the NN to solve practical problems has been one of the research hot spots. The so-called wavelet neural network (WNN), or wavelet network (WN), is a combination of the two techniques and inherits the advantages of both the neural network and the wavelet transform. Proposed by Q. Zhang in 1992 [10], the WNN uses a wavelet function as the activation function instead of the sigmoid activation function.
The topology of the WNN is based on the BP network: the transfer function of the hidden-layer nodes is the mother wavelet function, and during training the signal propagates forward while the error is backpropagated. The network topology is shown in Figure 1, where \(x_1, x_2, \ldots, x_n\) is the input vector, \(y_1, y_2, \ldots, y_l\) is the predicted output, \(\omega_{ij}\) and \(\omega_{jk}\) are the weights connecting the layers, and \(h_j\) is the mother wavelet function.
For the input signal sequence \(x = (x_1, x_2, \ldots, x_n)\), the output of the hidden layer is calculated as

\[
h(j) = h_{j}\!\left(\frac{\sum_{i=1}^{n} \omega_{ij} x_{i} - b_{j}}{a_{j}}\right), \quad j = 1, 2, \ldots, m \tag{5}
\]

where \(h(j)\) is the output value of node \(j\) in the hidden layer, \(h_j\) is the mother wavelet function, \(\omega_{ij}\) is the weight connecting the input layer and the hidden layer, \(b_j\) is the shift factor, and \(a_j\) is the stretch factor of \(h_j\).
Currently, the choice of the mother wavelet function has not yet formed a standard theory. Commonly used wavelet functions are Morlet, Haar, Daubechies (dbN), Symlet (symN), Meyer, Coiflet, biorthogonal wavelets, and so on.
The output of the output layer is calculated as

\[
y(k) = \sum_{i=1}^{m} \omega_{ik}\, h(i), \quad k = 1, 2, \ldots, l \tag{6}
\]
Figure 1: Topology of the wavelet neural network (inputs \(x_1, \ldots, x_n\) feed the hidden wavelet nodes \(h_1, h_2, \ldots\), which feed the output nodes \(y_1, \ldots\); layers labeled 1 input, 2 hidden, 3 output).
where \(h(i)\) is the output value of node \(i\) in the hidden layer, \(\omega_{ik}\) is the weight connecting the hidden layer and the output layer, and \(l\) and \(m\) are the numbers of nodes in the output layer and the hidden layer, respectively.
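Equations (5) and (6) amount to one matrix-vector product per layer, with a wavelet nonlinearity in between. A minimal Python sketch of the forward pass (illustrative only; the 6-12-1 shape matches Section 3.2, and all names and the random initialization are ours):

```python
import numpy as np

def morlet(t):
    return np.cos(1.75 * t) * np.exp(-t**2 / 2)

def wnn_forward(x, W_in, b, a, W_out, psi=morlet):
    """Forward pass of the WNN of Eqs. (5)-(6).
    x: (n,) input; W_in: (m, n) input->hidden weights;
    b, a: (m,) shift and stretch factors; W_out: (l, m) hidden->output weights."""
    h = psi((W_in @ x - b) / a)   # Eq. (5): hidden-layer wavelet outputs
    return W_out @ h              # Eq. (6): linear output layer

rng = np.random.default_rng(0)
n, m, l = 6, 12, 1                # the 6-12-1 structure used in Section 3.2
x = rng.random(n)
y = wnn_forward(x, rng.standard_normal((m, n)), rng.standard_normal(m),
                np.ones(m), rng.standard_normal((l, m)))
print(y.shape)  # (1,)
```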
For the WNN, the weight-updating algorithm is similar to that of the BP network: the gradient method is used to update the mother wavelet function parameters and the connection weights between the layers, making the predicted output closer and closer to the desired output. The weights of the WNN and the parameters of the wavelet function are updated as follows.
(1) Calculate the prediction error of the WNN:

\[
e = \sum_{k=1}^{m} \bigl(y_{n}(k) - y(k)\bigr) \tag{7}
\]

where \(y(k)\) is the predicted output value and \(y_{n}(k)\) is the expected output value of the network.
(2) Update the weights of the WNN and the parameters of the wavelet function according to the prediction error \(e\):

\[
\omega_{nk}^{(i+1)} = \omega_{nk}^{(i)} + \Delta\omega_{nk}^{(i+1)}, \qquad
a_{k}^{(i+1)} = a_{k}^{(i)} + \Delta a_{k}^{(i+1)}, \qquad
b_{k}^{(i+1)} = b_{k}^{(i)} + \Delta b_{k}^{(i+1)} \tag{8}
\]

where \(\Delta\omega_{nk}^{(i+1)}\), \(\Delta a_{k}^{(i+1)}\), and \(\Delta b_{k}^{(i+1)}\) are calculated from the network prediction error:

\[
\Delta\omega_{nk}^{(i+1)} = -\eta \frac{\partial e}{\partial \omega_{nk}^{(i)}}, \qquad
\Delta a_{k}^{(i+1)} = -\eta \frac{\partial e}{\partial a_{k}^{(i)}}, \qquad
\Delta b_{k}^{(i+1)} = -\eta \frac{\partial e}{\partial b_{k}^{(i)}} \tag{9}
\]
where \(\eta\) is the learning rate. The process of training the WNN is as follows:
(1) Data preprocessing: the original data are first quantified and normalized, and then divided into a training set and a testing set, used for network training and testing, respectively.

(2) Initialization: the WNN connection weights \(\omega_{ij}\) and \(\omega_{jk}\), the translation factors \(b_k\), and the scale factors \(a_k\) are randomly initialized, and the learning rate \(\eta\) is set.

(3) Training the network: the training set is input into the WNN, the network's predicted output values are computed, and the error \(e\) between the output and the expected value is calculated.

(4) Updating the weights: the mother wavelet function parameters and the network weights are updated according to the prediction error \(e\), making the predicted value of the network as close as possible to the actual value.

(5) If the results satisfy the given conditions, the testing set is used to test the network; otherwise, return to Step (3).
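The update rules (7)-(9) together with training steps (1)-(5) can be sketched as a single gradient-descent step. This is an illustrative Python sketch, not the paper's MATLAB implementation: it uses a squared-error loss in place of the signed error of (7) so that the gradient is well defined, derives the Morlet derivative analytically, and all names are our own.

```python
import numpy as np

def psi(t):   # Morlet wavelet, Eq. (14)
    return np.cos(1.75 * t) * np.exp(-t**2 / 2)

def dpsi(t):  # its analytic derivative
    return (-1.75 * np.sin(1.75 * t) - t * np.cos(1.75 * t)) * np.exp(-t**2 / 2)

def train_step(x, y_true, W_in, b, a, W_out, eta=0.01):
    """One in-place gradient-descent update in the spirit of Eqs. (8)-(9)."""
    z = (W_in @ x - b) / a              # hidden pre-activations
    h = psi(z)                          # Eq. (5)
    y = W_out @ h                       # Eq. (6)
    err = y - y_true                    # output error
    g_z = (W_out.T @ err) * dpsi(z)     # error backpropagated through the wavelet
    W_out -= eta * np.outer(err, h)
    W_in  -= eta * np.outer(g_z / a, x)
    b     -= eta * (-g_z / a)           # since dz/db = -1/a
    a     -= eta * (-g_z * z / a)       # since dz/da = -z/a
    return 0.5 * float(err @ err)

rng = np.random.default_rng(1)
W_in, b = rng.standard_normal((12, 6)), rng.standard_normal(12)
a, W_out = np.ones(12), rng.standard_normal((1, 12))
x, y_true = rng.random(6), np.array([0.6])
losses = [train_step(x, y_true, W_in, b, a, W_out) for _ in range(50)]
print(losses[-1] < losses[0])
```

On this single-sample toy problem the loss shrinks over the 50 updates, which is the behavior steps (3)-(5) rely on.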
2.3. MWFWNN. The MWFWNN network is presented in this section.
2.3.1. MWFWNN Algorithm

Step 1 (initialization). Initialize the mother wavelet function library \(\text{waveFunction} = \{\text{waveFunction}_i\}\), \(i = 1, 2, \ldots, K\), where \(K = \|\text{waveFunction}\|\) is the number of elements in the library; the variance \(\delta_i\) of each mother wavelet function \(\text{waveFunction}_i\); and the optimal wavelet function \(\text{waveFunction}_{\text{best}} = \text{waveFunction}_1\) with its variance \(\delta_{\text{best}} = \delta_1\).
Step 2 (choosing the best mother wavelet function). For each mother wavelet function:

(i) update the weights and the parameters of the wavelet function \(\text{waveFunction}_i\) according to (7)-(9);
(ii) if \(\delta_i < \delta_{\text{best}}\), then

\[
\delta_{\text{best}} = \delta_{i}, \qquad \text{waveFunction}_{\text{best}} = \text{waveFunction}_{i}. \tag{10}
\]

End.
Step 3. Construct the MWFWNN using \(\text{waveFunction}_{\text{best}}\) as the mother wavelet function.

Step 4. Test the MWFWNN network constructed in Step 3 using the testing set.

Step 5. Analyze the results.
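Steps 1-5 reduce to a select-the-minimum loop over the library. A sketch, assuming a `train_and_score` callback that builds, trains, and scores a WNN for a given mother wavelet (here stubbed with the MSE values later reported in Table 2; both the callback and all names are our assumptions):

```python
def select_best_wavelet(candidates, train_and_score):
    """Steps 1-2 of the MWFWNN algorithm: train one WNN per candidate
    mother wavelet and keep the one with the smallest error (Eq. (10))."""
    best_name, best_score = None, float("inf")
    for name, candidate in candidates.items():
        score = train_and_score(candidate)
        if score < best_score:               # the comparison of Eq. (10)
            best_name, best_score = name, score
    return best_name, best_score

# Stub scores taken from Table 2; in the real algorithm, train_and_score
# would build and train a WNN and return its training-set MSE.
table2_mse = {"Haar": 2.10e-2, "Gaussian": 1.03e54, "Morlet": 1.23e-3,
              "Mexihat": 1.27e-3, "Shannon": 1.11e55, "Meyer": 3.90e57,
              "GGW": 1.60e-2}
best, mse = select_best_wavelet(table2_mse, lambda score: score)
print(best)  # Morlet
```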
3. Target Threat Assessment Using MWFWNN

Strictly speaking, threat assessment is an NP-hard problem belonging to the third level of the JDL information fusion model. Target threat assessment needs to consider many factors (such as geography, weather, and the enemy), the relation among the various factors is not a simple linear combination, and it is difficult to determine a function between the target threat value and the various factors. Therefore, we must consider the various factors and their relationships when studying threat assessment. In general, however, we consider the following six factors: target type, target speed, target heading angle, target height, target interference, and target distance. We test the performance of MWFWNN using these factors in this paper.
3.1. Target Threat Assessment Factors. We mainly consider the following six key factors when studying target threat assessment in this paper:
(1) Target type: large targets (such as fighter-bombers), small targets (such as stealth aircraft and cruise missiles), and helicopters.
(2) Target heading angle: for example, 22°, 26°, and 6°.
(3) Target speed: for example, 100 m/s, 500 m/s, and 220 m/s.
(4) Target height: for example, very low, low, medium, and high.
(5) Target interference: for example, strong, medium, and weak.
(6) Target distance: for example, 100 km, 110 km, and 220 km.
3.2. Threat Assessment Model Using MWFWNN. We design the MWFWNN model according to the characteristics of the data. Because the input data are 6-dimensional and the output is 1-dimensional, the structure of the WNN is 6-12-1. First, the six indicators (target type, target speed, target heading angle, target interference, target height, and target distance) are fed into the input layer. The hidden-layer nodes are formed by the wavelet function, and the output layer outputs the predicted target threat assessment value under the current indicators. On the basis of the above analysis, we construct the target threat assessment model based on MWFWNN with these six selected indicators; its architecture is shown in Figure 2.
4. Model Simulation

In this section, we test the target threat assessment model using the MWFWNN proposed in Section 3.
4.1. Data Preprocessing. Part of the data used in our work is shown in Table 1. The target value is quantified using G. A. Miller's quantitative theory, which represents the degree of threat as extremely small, very small, little small, small, medium, large, little large, very large, and extremely large. The properties are quantified according to the following specific criteria:
(1) Target type: helicopters, large targets (such as fighter-bombers), and small targets (such as stealth aircraft and cruise missiles) are quantified by 3, 5, and 8, respectively.
(2) Target interference: strong, medium, weak, and no interference are quantified by 8, 6, 4, and 2, respectively.
(3) Target height: very low, low, medium, and high are quantified by 8, 6, 4, and 2, respectively.
(4) Target speed: 0 m/s~1800 m/s, at equal intervals of 200 m/s, is quantified by 9 to 1.
Figure 2: Architecture of the target threat assessment model based on MWFWNN (inputs \(x_1\) type, \(x_2\) speed, \(x_3\) heading angle, \(x_4\) interference, \(x_5\) height, and \(x_6\) distance; hidden nodes \(h_1, h_2, \ldots, h_{12}\); output \(y\); layers labeled 1 input, 2 hidden, 3 output).
(5) Target heading angle: 0°~36°, at equal intervals of 4°, is quantified by 9 to 1.
(6) Target distance: 0 km~450 km, at equal intervals of 50 km, is quantified by 9 to 1.
(7) Determining the target output: first normalize the various factors of the air combat situation, then feed them into the MWFWNN proposed in Section 3.2; finally, the MWFWNN outputs the threat assessment value.
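As a worked example of rules (1)-(6), the quantification of one raw record can be sketched as follows (an assumption-laden sketch: the handling of values that fall exactly on interval boundaries is our choice, since the paper does not specify it, and all names are ours):

```python
def quantify(record):
    """Map one raw sample to the 6 quantified inputs, per rules (1)-(6)."""
    type_q = {"helicopter": 3, "large": 5, "small": 8}
    interference_q = {"strong": 8, "medium": 6, "weak": 4, "no": 2}
    height_q = {"very low": 8, "low": 6, "medium": 4, "high": 2}
    return [
        type_q[record["type"]],
        9 - min(int(record["speed"] // 200), 8),    # 0-1800 m/s -> 9..1
        9 - min(int(record["heading"] // 4), 8),    # 0-36 deg   -> 9..1
        interference_q[record["interference"]],
        height_q[record["height"]],
        9 - min(int(record["distance"] // 50), 8),  # 0-450 km   -> 9..1
    ]

# Sample no. 1 of Table 1
print(quantify({"type": "large", "speed": 450, "heading": 8,
                "interference": "medium", "height": "low", "distance": 300}))
# [5, 7, 7, 6, 6, 3]
```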
After quantifying the data, we normalize the training set and the testing set using the following expression:

\[
f : x \rightarrow y = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \tag{11}
\]

where \(x, y \in R^{n}\), \(x_{\min} = \min(x)\), and \(x_{\max} = \max(x)\). Through this normalization the values are converted into the range \([0, 1]\), that is, \(y_i \in [0, 1]\), \(i = 1, 2, \ldots, n\).
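Applied column-wise to a data matrix, (11) is a one-liner; a short sketch (names and the toy matrix are ours):

```python
import numpy as np

def minmax_normalize(X):
    """Eq. (11) applied column-wise: scale each column into [0, 1]."""
    X = np.asarray(X, dtype=float)
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return (X - xmin) / (xmax - xmin)

X = np.array([[450.0, 8.0], [400.0, 3.0], [800.0, 16.0]])
Xn = minmax_normalize(X)
print(Xn[1].tolist())  # [0.0, 0.0]: this row holds both column minima
```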
4.2. Analysis of Simulation Results. We implement the MWFWNN algorithm in MATLAB R2009a on a Pentium 4 CPU at 3.06 GHz with 1 GB of memory (2 × 512 MB). The network prediction is compared with the WNN, PSO_SVM, and BP neural networks; the results show that the proposed network is superior to all three.
4.2.1. Creating the Mother Wavelet Function Library. Because many mother wavelet functions have no explicit expression, we cannot work out their derivatives. Therefore, in this work we use only the following seven mother wavelet functions to build the library waveFunction. Their expressions are as follows.
(1) Haar wavelet function:

\[
\Psi(t) =
\begin{cases}
1, & 0 \le t < \frac{1}{2} \\
-1, & \frac{1}{2} \le t < 1 \\
0, & \text{otherwise}
\end{cases} \tag{12}
\]
Table 1: Part of the data.

| No. | Type       | Velocity (m/s) | Heading angle (°) | Interference | Height   | Distance (km) | Threat value |
|-----|------------|----------------|-------------------|--------------|----------|---------------|--------------|
| 1   | Large      | 450            | 8                 | Medium       | Low      | 300           | 0.5843       |
| 2   | Large      | 400            | 3                 | Strong       | High     | 100           | 0.5707       |
| 3   | Large      | 450            | 16                | Medium       | Low      | 200           | 0.5333       |
| 4   | Large      | 800            | 4                 | Strong       | High     | 100           | 0.6895       |
| 5   | Large      | 800            | 12                | Strong       | Low      | 320           | 0.6896       |
| 6   | Small      | 530            | 6                 | Strong       | Medium   | 230           | 0.6056       |
| 7   | Small      | 650            | 8                 | Strong       | Medium   | 200           | 0.7425       |
| 8   | Small      | 700            | 12                | Strong       | Low      | 320           | 0.7336       |
| 9   | Small      | 750            | 15                | Medium       | Very low | 400           | 0.7541       |
| 10  | Small      | 640            | 18                | Strong       | Medium   | 280           | 0.6764       |
| 11  | Helicopter | 90             | 12                | Weak         | Very low | 320           | 0.3937       |
| 12  | Helicopter | 110            | 3                 | No           | Medium   | 100           | 0.3927       |
| 13  | Helicopter | 100            | 9                 | No           | Medium   | 260           | 0.3351       |
| 14  | Helicopter | 120            | 15                | No           | Low      | 160           | 0.3586       |
| 15  | Helicopter | 80             | 6                 | Weak         | High     | 180           | 0.3471       |
Table 2: Prediction results of the different mother wavelet functions.

| Wavelet function | Haar        | Gaussian    | Morlet      | Mexihat     | Shannon     | Meyer       | GGW         |
| MSE              | 2.10 × 10⁻² | 1.03 × 10⁵⁴ | 1.23 × 10⁻³ | 1.27 × 10⁻³ | 1.11 × 10⁵⁵ | 3.90 × 10⁵⁷ | 1.60 × 10⁻² |
| Running time (s) | 4.74        | 4.76        | 4.85        | 4.88        | 4.84        | 5.24        | 4.88        |
(2) Gaussian wavelet function:

\[
\Psi(t) = \frac{t}{\sqrt{2\pi}} \exp\!\left(-\frac{t^{2}}{2}\right) \tag{13}
\]

(3) Morlet wavelet function:

\[
\Psi(t) = \cos(1.75t) \exp\!\left(-\frac{t^{2}}{2}\right) \tag{14}
\]

(4) Mexican Hat (Mexihat) wavelet function:

\[
c = \frac{2}{\sqrt{3}}\,\pi^{-1/4}, \qquad
\Psi(t) = c\,(1 - t^{2}) \exp\!\left(-\frac{t^{2}}{2}\right) \tag{15}
\]

(5) Shannon wavelet function:

\[
\Psi(t) = \frac{\sin \pi(t - 1/2) - \sin 2\pi(t - 1/2)}{\pi(t - 1/2)} \tag{16}
\]

(6) Meyer wavelet function (approximate formula):

\[
\Psi(t) = 35t^{4} - 84t^{5} + 70t^{6} - 20t^{7} \tag{17}
\]

(7) Wavelet function GGW, constructed by the authors:

\[
\Psi(t) = \sin(3t) + \sin(0.3t) + \sin(0.03t) \tag{18}
\]
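The library of (12)-(18) can be transcribed directly; an illustrative Python sketch (the dictionary name is ours, and the half-open intervals at the Haar endpoints are our convention):

```python
import numpy as np

def haar(t):      # Eq. (12), half-open intervals at the endpoints
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1), -1.0, 0.0))

def gaussian(t):  # Eq. (13)
    return t / np.sqrt(2 * np.pi) * np.exp(-t**2 / 2)

def morlet(t):    # Eq. (14)
    return np.cos(1.75 * t) * np.exp(-t**2 / 2)

def mexihat(t):   # Eq. (15)
    c = 2 / np.sqrt(3) * np.pi ** -0.25
    return c * (1 - t**2) * np.exp(-t**2 / 2)

def shannon(t):   # Eq. (16); undefined at t = 1/2
    u = t - 0.5
    return (np.sin(np.pi * u) - np.sin(2 * np.pi * u)) / (np.pi * u)

def meyer(t):     # Eq. (17), polynomial approximation
    return 35 * t**4 - 84 * t**5 + 70 * t**6 - 20 * t**7

def ggw(t):       # Eq. (18)
    return np.sin(3 * t) + np.sin(0.3 * t) + np.sin(0.03 * t)

wave_function_library = {"Haar": haar, "Gaussian": gaussian, "Morlet": morlet,
                         "Mexihat": mexihat, "Shannon": shannon,
                         "Meyer": meyer, "GGW": ggw}
print(float(morlet(0.0)))  # 1.0
```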
We use the following parameters to initialize the network: the number of input-layer nodes \(M = 6\), the number of hidden-layer nodes \(n = 12\), the number of output nodes \(N = 1\), and parameter learning rates \(lr_1 = 0.01\) and \(lr_2 = 0.001\), respectively.

We construct a wavelet neural network with each wavelet function in the library, input the training set into the network, and train it. The resulting MSEs, from small to large, are Morlet < Mexihat < GGW < Haar < Gaussian < Shannon < Meyer, as shown in Table 2. From Table 2 we can conclude that the running times and MSEs of Morlet and Mexihat differ only slightly, but the MSEs of Gaussian, Shannon, and Meyer are extremely large, completely divorced from reality. The results obtained in this paper are consistent with the fact that most papers adopt Mexihat or Morlet as the mother wavelet function when constructing wavelet neural networks. So we can use the Morlet or Mexihat wavelet function as the mother wavelet function to create the MWFWNN network; in this paper, we use the Morlet wavelet function as the mother wavelet function to construct the MWFWNN.
4.2.2. Comparison with WNN, PSO_SVM, and BP. Similar to MWFWNN, the WNN, the BP network, and the support vector machine (SVM) can be used to solve target assessment. As the choice of the SVM parameters \(c\) and \(g\) has no uniform standard beyond reliance on experience, in this paper we use the particle swarm optimization (PSO) algorithm to optimize the SVM parameters \(c\) and \(g\). We then use the wavelet neural network, the BP neural network, and PSO_SVM to solve threat assessment, and the results are compared with MWFWNN.
The structures of the wavelet neural network and the BP neural network are 6-12-1, according to the characteristics of the data used. The WNN and other parameters are set as in Section 4.2.1. PSO_SVM uses the LIBSVM toolbox [11], whose defaults are C-SVR and the RBF kernel function, with \(C = 1\) and \(\gamma = 1/n\). By default, the PSO local search factor is \(c_1 = 1.5\), the global search factor is \(c_2 = 1.7\), the maximum number of generations is maxgen = 200, the maximum population size is sizepop = 20, the rate between \(V\) and \(X\) is \(k = 0.6\) (\(k \in [0.1, 1.0]\), \(V = kX\)), the SVM parameter \(c \in [0.1, 100]\), and the parameter \(g \in [0.01, 1000]\).

Table 3: Prediction results of BP, PSO_SVM, MWFWNN, and WNN.

| Network          | BP          | PSO_SVM     | MWFWNN      | WNN         |
| MSE              | 9.39 × 10⁻³ | 4.80 × 10⁻³ | 1.23 × 10⁻³ | 4.01 × 10⁻³ |
| Running time (s) | 1.86        | 1.06        | 4.88        | 5.10        |

Figure 3: Results of target threat assessment based on WNN, BP, MWFWNN, and PSO_SVM (threat assessment value, 0.4 to 1, plotted against sample number, 0 to 15, for the actual value and the four predictors).
Threat assessment is predicted by the trained WNN, BP network, and PSO_SVM; their MSEs are 4.01 × 10⁻³, 9.39 × 10⁻³, and 4.80 × 10⁻³, respectively, all of which are greater than that of MWFWNN (1.23 × 10⁻³), as shown in Table 3. As for the running time, the BP neural network is fastest because it calls MATLAB built-in functions; the wavelet neural network needs to call each mother wavelet function and its derivative, so it consumes more time; the MWFWNN network additionally needs to find the optimal wavelet function in the library, so its running time exceeds that of the WNN; and PSO_SVM calls the PSO algorithm to optimize the parameters \(c\) and \(g\), and since the PSO optimization algorithm is more complex, it runs most slowly.
The prediction errors of BP, PSO_SVM, WNN, and MWFWNN are shown in Figure 3. The figure shows that the prediction errors of WNN, BP, and PSO_SVM are similar: the predicted value is less than the true value for samples 1-10, while it is significantly greater than the true value for samples 11-15. The MWFWNN network error follows a similar trend, but it is significantly smaller than that of the WNN, the BP network, and PSO_SVM, with the prediction value closer to the expectation.
5. Conclusion

Based on the requirement to process information quickly in modern information warfare, and aiming at the characteristics of threat assessment in the data fusion functional model, we adopt MWFWNN to solve threat assessment under comprehensive consideration of the various factors that influence the threat degree. After constructing a wavelet function library with 7 wavelet functions, we obtain the best-performing wavelet function, Morlet, as the mother wavelet function to construct the MWFWNN network, and its results are compared with the WNN, BP, and PSO_SVM networks. Simulation results show that the MSE of MWFWNN is 1.23 × 10⁻³, far better than WNN (4.01 × 10⁻³), the BP network (9.39 × 10⁻³), and PSO_SVM (4.80 × 10⁻³), achieving the desired goal. In future work, we will further expand the scale of the wavelet function library to find more suitable wavelet functions for solving threat assessment and other problems.
References

[1] F. Johansson and G. Falkman, "A Bayesian network approach to threat evaluation with application to an air defense scenario," in Proceedings of the 11th International Conference on Information Fusion (FUSION '08), pp. 1-7, July 2008.
[2] J. Yang, W. Y. Gao, and J. Liu, "Threat assessment method based on Bayesian network," Journal of PLA University of Science and Technology, vol. 11, no. 1, pp. 43-48, 2010.
[3] J. Li and L. Han, "Threat ordering of interference targets based on uncertain multi-attribute decision-making," Telecommunication Engineering, vol. 49, no. 10, pp. 62-64, 2009.
[4] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "Target threat assessment using glowworm swarm optimization and BP neural network," Journal of Jilin University, in press.
[5] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "The model and algorithm for the target threat assessment based on Elman_AdaBoost strong predictor," Acta Electronica Sinica, vol. 40, no. 5, pp. 901-906, 2012.
[6] X. D. Gu, Z. X. Tong, S. J. Chai, X. W. Yuan, and Y. L. Lu, "Target threat assessment based on TOPSIS combined by IAHP and the maximal deviation," Journal of Air Force Engineering University, vol. 12, no. 2, pp. 27-31, 2011.
[7] K. Sycara, R. Glinton, B. Yu, et al., "An integrated approach to high-level information fusion," Information Fusion, vol. 10, no. 1, pp. 25-50, 2009.
[8] A. Kott, R. Singh, W. M. McEneaney, and W. Milks, "Hypothesis-driven information fusion in adversarial, deceptive environments," Information Fusion, vol. 12, no. 2, pp. 131-144, 2011.
[9] A. Grossmann and J. Morlet, "Decomposition of Hardy functions into square integrable wavelets of constant shape," SIAM Journal on Mathematical Analysis, vol. 15, no. 4, pp. 723-736, 1984.
[10] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 889-898, 1992.
[11] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.
The Scientific World Journal 3
2Hidden layer
3Output layer
1Input layer
ℎ1
119909119899
1199091
119910119898
1199101
1199092
ℎ119895
ℎ2
Figure 1 Topology of wavelet neural network
where ℎ(119894) is the output value for node 119894 in the hidden layer120596119894119896
is weight connecting the hidden layer and output layer119897 and 119898 are the number of nodes for output layer and thehidden layer respectively
ForWNN the updating weight algorithm is similar to BPnetwork the gradient method is used to update motherwavelet function parameters and connection weightsbetween the layers making the prediction output closer andcloser to the desired output The weights of WNN and theparameters of wavelet function are updated as follows
(1) Calculating the prediction error of WNN
119890 =
119898
sum
119896=1
119910119899 (119896) minus 119910 (119896) (7)
where 119910(119896) is the predicted output value 119910119899(119896) is the expect-ed output value for the network
(2) Updating the weights of WNN and the parameters ofwavelet function according to the prediction error 119890
120596(119894+1)
119899119896
= 120596(119894)
119899119896
+ Δ120596(119894+1)
119899119896
119886(119894+1)
119896
= 119886(119894)
119896
+ Δ119886(119894+1)
119896
119887(119894+1)
119896
= 119887(119894)
119896
+ Δ119887(119894+1)
119896
(8)
where Δ120596(119894+1)119899119896
Δ119886(119894+1)
119896
and Δ119887(119894+1)119896
are calculated by the net-work prediction error
Δ120596(119894+1)
119899119896
= minus120578120597119890
120597120596(119894)
119899119896
Δ119886(119894+1)
119896
= minus120578120597119890
120597119886(119894)
119896
Δ119887(119894+1)
119896
= minus120578120597119890
120597119887(119894)
119896
(9)
where 120578 is the learning rateThe process of training WNN is as follows
(1) Data preprocessing first the original data is quanti-fied and normalized and then the data is divided intotraining set and testing set for network training andtesting respectively
(2) Initializing WNN connection weights 120596119894119895
and 120596119895119896
translation factor 119887
119896
and scale factor 119886119896
are randomlyinitialized and the learning rate 120578 is set
(3) Training network input the training set into WNNcompute network predicted output values and calcu-late the error 119890between output and the expected value
(4) Updating the weights update mother wavelet func-tion parameters and networkweights according to theprediction error 119890 making the predictive value of thenetwork as close to actual values
(5) If the results satisfy the given conditions use the test-ing set to test the network otherwise return to Step 3
23 MWFWNN MWFWNN will be provided in this sec-tion
231 MWFWNN Algorithm
Step 1 Initializing initialize mother wavelet function librarywaveFunction = waveFunction
119894
119894 = 1 2 119870 119870 =
||waveFunction|| is the number of elements waveFunctionincluded the variance 120575
119894
for mother wavelet functionswaveFunction
119894
and optimal wavelet functionwaveFunctionbest = waveFunction
1
and its variance 120575best =1205751
Step 2 Choosing the best mother wavelet function for eachmother wavelet function
(i) Update the weights and parameters of wavelet func-tion waveFunction
119894
according to (7)ndash(9)(ii) if 120575
119894
lt 120575best
120575best = 120575119894
waveFunctionbest = waveFunction119894(10)
End
Step 3 Constructing MWFWNN using waveFunctionbest asmother wavelet function
Step 4 Testing constructed MWFWNN network in Step 3using the testing set
Step 5 Analyzing results
3 Target Threat Assessment Using MWFWNN
Strictly speaking threat assessment is an NP-hard problembelonging to the third level in the JDL information fusionmodel Target threat assessment needs to consider manyfactors (such as geography weather enemy etc) and therelation among the various factors is not a simple linear com-bination and it is difficult to determine a function betweenthe target threat value and various factors Thereforewe must consider various factors and their relationshipswhen studying the threat assessment However we consider
4 The Scientific World Journal
the following six factors in general target type target speedtarget heading angle target height and target distance Wewill test the performance of MWFWNN using these factorsin this paper
31 Target Threat Assessment Factor Wemainly consider thefollowing six key factors when studying the target threatassessment in the paper
(1) Target Type large targets (such as fighter-bombers)small targets (such as stealth aircraft cruise missiles)and helicopters
(2) Target heading angle such as 22∘ 26∘ and 6∘(3) Target speed such as 100ms 500ms and 220ms(4) Target height such as very low low medium and
high(5) Target interference such as strong medium and
weak(6) Target distance such as 100 km 110 km and 220 km
32 Threat Assessment Model Using MWFWNN We designMWFWNN model according to the data characteristicsBecause the data is 6-dimensional and the output is 1-dimen-sional the structure of WNN is 6-12-1 Firstly we input sixindicators that are the target type target speed target headingangle target interference target height and the distance tothe input layer The hidden layer nodes are formed by thewavelet function and the output layer outputs predicted tar-get threat assessment value under the current indicators Onthe basis of the above analysis we construct the target threatassessment model based on MWFWNN with these sixselected indicators and its architecture is shown in Figure 2
4 Model Simulation
In this section we will test the target threat assessment modelusing MWFWNN proposed in Section 3
41 Data Preprocessing Part of the data used in our work isshown in Table 1 Target value is quantified using G A Mil-lerrsquos quantitative theory which represents the degree of threatextremely small very small little small small medium largelittle large very large and extremely large The properties ofthe specific quantitative criteria are quantified as follows
(1) Target Type helicopter large target (such as fighter-bombers) and small targets (such as stealth aircraftcruise missiles) are quantified by 3 5 and 8 respec-tively
(2) Target interference strongmediumweak and no arequantified by 8 6 4 and 2 respectively
(3) Target height very low low medium and high arequantified 8 6 4 and 2 respectively
(4) Target speed 0mssim1800ms equal interval(200ms) is quantified by 9 to 1
ℎ1
Height 1199095
Type 1199091
Speed 1199092ℎ2
Distance 1199096
Heading angle 1199093
Interference 1199094ℎ12
119910
2Hidden layer
3Output layer
1Input layer
Figure 2 Architecture for the model of target threat assessmentbased on MWFWNN
(5) Target heading angle 0∘sim36∘ equal interval (4∘) isquantified by 9 to 1
(6) Target distance 0 kmsim450 km equal interval (50 km)is quantified by 9 to 1
(7) Determining the target output firstly normalize thevarious factors from air combat situation and then putthem into the WMF WNN proposed in Section 32At last the WMF WNN outputs threatening assess-ment value
After quantifying the data we can normalize the trainingset and testing set using the following expression
119891 119909 997888rarr 119910 =119909 minus 119909min119909max minus 119909min
(11)
where 119909 119910 isin 119877119899 119909min = min(119909) 119909max = max(119909) The valuesare converted into the range [0 1] through the normalizationthat is 119909max isin [0 1] 119894 = 1 2 119899
42 Analysis of Simulation Results In this paper we imple-ment the MWFWNN algorithm by MATLAB R2009a withthe CPU Pentium (R) 4 306GHz 1 G memory (2 lowast 512M)The network prediction is compared with WNN PSO SVMand BP neural network The results show that the proposednetwork is superior to the WNN PSO SVM and BP neuralnetwork
421 Creating the Mother Wavelet Function Library Becausemany mother wavelet functions have no specific expressionwe cannot work out their derivativesTherefore in this workwe only use the following seven mother wavelet functions tobulid a librarywaveFuncitonTheir expressions are as follows
(1) Haar wavelet function
Ψ (119905) =
1 0 le 119905 le1
2
minus11
2le 119905 le 1
0 other
(12)
The Scientific World Journal 5
Table 1 Part of data
No Type Velocity (ms) Heading angle (∘) Inference Height Distance (km) Threat value1 Large 450 8 Medium Low 300 058432 Large 400 3 Strong High 100 057073 Large 450 16 Medium Low 200 053334 Large 800 4 Strong High 100 068955 Large 800 12 Strong Low 320 068966 Small 530 6 Strong Medium 230 060567 Small 650 8 Strong Medium 200 074258 Small 700 12 Strong Low 320 073369 Small 750 15 Medium Very low 400 0754110 Small 640 18 Strong Medium 280 0676411 Helicopter 90 12 Weak Very low 320 0393712 Helicopter 110 3 No Medium 100 0392713 Helicopter 100 9 No Medium 260 0335114 Helicopter 120 15 No Low 160 0358615 Helicopter 80 6 Weak High 180 03471
Table 2 Predicting results of different mother wavelet function
Wavelet function Haar Gaussian Morlet Mexihat Shannon Meyer GGWMSE 210 times 10minus2 103 times 1054 123 times 10minus3 127 times 10minus3 111 times 1055 390 times 1057 160 times 10minus2
Running time (s) 474 476 485 488 484 524 488
(2) Gaussian wavelet function
Ψ (119905) =119905
radic2120587
exp(minus1199052
2) (13)
(3) Morlet wavelet function
Ψ (119905) = cos (175119905) exp(minus1199052
2) (14)
(4) Mexican Hat (Mexihat) wavelet function
119888 =2
radic3
120587minus14
Ψ (119905) = 119888 (1 minus 1199052
) exp(minus1199052
2)
(15)
(5) Shannon wavelet function
Ψ (119905) =sin120587 (119905 minus 12) minus sin 2120587 (119905 minus 12)
120587 (119905 minus 12) (16)
(6) Meyer wavelet function (approximate formula)
Ψ (119905) = 351199054
minus 841199055
+ 701199056
minus 201199057
(17)
(7) Wavelet function GGW constructed by the authors
Ψ (119905) = sin (3119905) + sin (03119905) + sin (003119905) (18)
We use the following parameters to initialize the networkthe input layer nodes119872 = 6 the hidden layer nodes 119899 = 12the output nodes 119873 = 1 and parameter learning rates 119897119903
1
=
001 and 1198971199032
= 0001 respectivelyWe construct wavelet neural network using each wavelet
function in wavelet function library and then input trainingset into network and train the network The MSEs are as fol-lows (from small to large) Morlet ltMexihat lt GGW ltHaarlt Gaussian lt Shannon lt Meyer as shown in Table 2 FromTable 2 we can draw the conclusion that running time andMSE ofMorlet andMexihat are slightly different but theMSEof Gaussian Shannon and Meyer is extremely great com-pletely divorced from reality The results we got in this paperare consistent with the fact that most papers adopt Mexihator Morlet as mother wavelet functions to construct waveletneural network So we can use wavelet functions Morletor Mexihat as mother wavelet function to create MWFWNNnetwork In this paper we use the Morlet wavelet function asthe basic mother wavelet function to construct MWFWNN
422 Comparing with WNN and PSO SVM BP Similar toMWFWNNWNN BPnetwork and support vectormachine(SVM) can be used to solve the target assessment As thechoice of the support vector machine parameters 119888 and 119892 hasno uniform standard except relying on experience in thispaper we use particle swarm optimization (PSO) algorithmto optimize the SVM parameters 119888 and 119892 Next we usewavelet neural network BP neural network and PSO SVMto solve threat assessment and the results are compared withMWFWNN
The structure of wavelet neural network and BP neuralnetwork is 6-12-1 according to the characteristics of the data
6 The Scientific World Journal
Table 3: Predicting results of BP, PSO_SVM, MWFWNN, and WNN.

Neural network     BP           PSO_SVM      MWFWNN       WNN
MSE                9.39 × 10⁻³  4.80 × 10⁻³  1.23 × 10⁻³  4.01 × 10⁻³
Running time (s)   1.86         1.06         4.88         5.10
[Figure 3: Result of target threat assessment based on WNN, BP, MWFWNN, and PSO_SVM. The figure plots threat assessment value (0.4–1.0) against sample number (0–15) for BP, PSO_SVM, WNN, MWFWNN, and the actual values.]
The WNN and other parameters are set as in Section 4.2.1. PSO_SVM uses the LIBSVM toolbox [11], whose defaults are C-SVR with an RBF kernel, where C = 1 and γ = 1/n. By default, the PSO local search factor is c₁ = 1.5, the global search factor is c₂ = 1.7, the maximum number of generations is maxgen = 200, the maximum population size is sizepop = 20, and the rate between V and X is k = 0.6 (k ∈ [0.1, 1.0], V = kX). The SVM parameter ranges are c ∈ [0.1, 100] and g ∈ [0.01, 1000].
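A generic PSO loop with the settings quoted above can be sketched as follows. The SVR cross-validation error that PSO_SVM would minimize over (c, g) is replaced here by a hypothetical stand-in objective, since LIBSVM itself is outside this sketch, and the inertia weight w is a conventional choice not specified in the paper.

```python
import numpy as np

def pso_search(objective, bounds, c1=1.5, c2=1.7, maxgen=200,
               sizepop=20, k=0.6, w=0.8, seed=0):
    """Particle swarm search with the settings quoted above:
    local search c1 = 1.5, global search c2 = 1.7, maxgen = 200,
    sizepop = 20, and initial velocity V = kX with k = 0.6.
    The inertia weight w is a conventional choice, not from the paper.
    `objective` maps a parameter vector to a value to be minimized."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(sizepop, len(bounds)))  # positions
    V = k * X                                             # velocities (V = kX)
    pbest, pval = X.copy(), np.array([objective(x) for x in X])
    gbest = pbest[pval.argmin()].copy()                   # global best position
    for _ in range(maxgen):
        r1, r2 = rng.uniform(size=(2, sizepop, len(bounds)))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lo, hi)                        # keep within the ranges
        val = np.array([objective(x) for x in X])
        better = val < pval
        pbest[better], pval[better] = X[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())

# Stand-in objective over (c, g) using the quoted search ranges; a real
# PSO_SVM run would instead minimize the SVR cross-validation error.
best, score = pso_search(
    lambda p: (np.log10(p[0]) - 1) ** 2 + np.log10(p[1]) ** 2,
    bounds=[(0.1, 100), (0.01, 1000)], maxgen=60)
```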
Threat assessment is predicted by the trained WNN, BP network, and PSO_SVM; their MSEs are 4.01 × 10⁻³, 9.39 × 10⁻³, and 4.80 × 10⁻³, respectively, all greater than that of MWFWNN (1.23 × 10⁻³), as shown in Table 3. As for running time, the BP neural network runs fastest because it calls MATLAB's built-in functions. The wavelet neural networks must evaluate each mother wavelet function and its derivative, so they consume more time, and the MWFWNN network additionally has to find the optimal wavelet function in the library, so its running time exceeds that of the WNN. PSO_SVM calls the PSO algorithm to optimize the parameters c and g, and since the PSO optimization is the most complex of these procedures, it runs slowest.
The prediction errors of BP, PSO_SVM, WNN, and MWFWNN are shown in Figure 3. The figure shows that the prediction errors of WNN, BP, and PSO_SVM are similar: their predictions fall below the true values for samples 1–10 and significantly above the true values for samples 11–15. The MWFWNN error follows a similar trend, but it is significantly smaller than those of the WNN, BP network, and PSO_SVM, and its predictions are closer to the expected values.
5. Conclusion

Driven by the requirement to process information quickly in modern information warfare, and aiming at the characteristics of threat assessment in the data fusion functional model, we adopt MWFWNN to solve threat assessment under comprehensive consideration of the factors that influence threat degree. After constructing a wavelet function library with seven wavelet functions, we obtain the best-performing wavelet function, Morlet, as the mother wavelet function for constructing the MWFWNN network, and compare its results with the WNN, BP, and PSO_SVM networks. Simulation results show that the MSE of MWFWNN is 1.23 × 10⁻³, far better than WNN (4.01 × 10⁻³), the BP network (9.39 × 10⁻³), and PSO_SVM (4.80 × 10⁻³), achieving the desired goal. In future work, we will further expand the wavelet function library to find wavelet functions better suited to threat assessment and other problems.
References

[1] F. Johansson and G. Falkman, "A Bayesian network approach to threat evaluation with application to an air defense scenario," in Proceedings of the 11th International Conference on Information Fusion (FUSION '08), pp. 1–7, July 2008.
[2] J. Yang, W. Y. Gao, and J. Liu, "Threat assessment method based on Bayesian network," Journal of PLA University of Science and Technology, vol. 11, no. 1, pp. 43–48, 2010.
[3] J. Li and L. Han, "Threat ordering of interference targets based on uncertain multi-attribute decision-making," Telecommunication Engineering, vol. 49, no. 10, pp. 62–64, 2009.
[4] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "Target threat assessment using glowworm swarm optimization and BP neural network," Journal of Jilin University. In press.
[5] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "The model and algorithm for the target threat assessment based on Elman_AdaBoost strong predictor," Acta Electronica Sinica, vol. 40, no. 5, pp. 901–906, 2012.
[6] X. D. Gu, Z. X. Tong, S. J. Chai, X. W. Yuan, and Y. L. Lu, "Target threat assessment based on TOPSIS combined by IAHP and the maximal deviation," Journal of Air Force Engineering University, vol. 12, no. 2, pp. 27–31, 2011.
[7] K. Sycara, R. Glinton, B. Yu, et al., "An integrated approach to high-level information fusion," Information Fusion, vol. 10, no. 1, pp. 25–50, 2009.
[8] A. Kott, R. Singh, W. M. McEneaney, and W. Milks, "Hypothesis-driven information fusion in adversarial, deceptive environments," Information Fusion, vol. 12, no. 2, pp. 131–144, 2011.
[9] A. Grossmann and J. Morlet, "Decomposition of Hardy functions into square integrable wavelets of constant shape," SIAM Journal on Mathematical Analysis, vol. 15, no. 4, pp. 723–736, 1984.
[10] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 889–898, 1992.
[11] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.
6 The Scientific World Journal
Table 3 Predicting results of BP PSO SVM MWFWNN and WNN
Neural network BP PSO SVM MWFWNN WNNMSE 939 times 10minus3 480 times 10minus3 123 times 10minus3 401 times 10minus3
Running time (s) 186 106 488 510
Figure 3: Result of target threat assessment based on WNN, BP, MWFWNN, and PSO_SVM (threat assessment value versus sample number, 15 samples, plotted against the actual values).
used, where the WNN and other parameters are set as shown in Section 4.2.1. PSO_SVM uses the LIBSVM toolbox [11], whose defaults are C-SVR and the RBF kernel function, with C = 1 and γ = 1/n. By default, the PSO local search factor c1 = 1.5, the global search factor c2 = 1.7, the maximum number of evolution iterations maxgen = 200, the maximum population size sizepop = 20, and the rate k = 0.6 between V and X (k ∈ [0.1, 1.0], V = kX); the SVM parameter c ∈ [0.1, 100] and the parameter g ∈ [0.01, 1000].
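The PSO parameter search described above can be sketched as follows. This is a minimal sketch under stated assumptions: the fitness function below is a hypothetical stand-in for the cross-validated SVM error on the threat data (which is not reproduced here), and maxgen is reduced from 200 for a quick demonstration; c1, c2, sizepop, k, and the search ranges for c and g follow the text.

```python
import random

# PSO settings taken from the text
C1, C2 = 1.5, 1.7          # local (cognitive) and global (social) factors
SIZEPOP = 20               # population size
MAXGEN = 50                # reduced from 200 for a quick demo
K = 0.6                    # velocity/position ratio: V = k * X
BOUNDS = [(0.1, 100.0),    # SVM parameter c
          (0.01, 1000.0)]  # SVM parameter g

def fitness(pos):
    # Hypothetical stand-in for the cross-validated SVM error;
    # its minimum is placed at c = 10, g = 1 for illustration.
    c, g = pos
    return (c - 10.0) ** 2 + (g - 1.0) ** 2

def pso(seed=0):
    rng = random.Random(seed)
    # Initialise positions uniformly in the search box, velocities as V = k * X
    pos = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(SIZEPOP)]
    vel = [[K * x for x in p] for p in pos]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g_i = min(range(SIZEPOP), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(MAXGEN):
        for i in range(SIZEPOP):
            for d, (lo, hi) in enumerate(BOUNDS):
                # Standard velocity update: pull toward personal and global bests
                vel[i][d] = (vel[i][d]
                             + C1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + C2 * rng.random() * (gbest[d] - pos[i][d]))
                # Move, clamping to the allowed parameter range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the paper's setup, each fitness evaluation would train and cross-validate an SVM, which is why PSO_SVM is the slowest of the four methods.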
Threat assessment is predicted by the trained WNN, BP network, and PSO_SVM; their MSEs are 4.01 × 10⁻³, 9.39 × 10⁻³, and 4.80 × 10⁻³, all of which are greater than that of MWFWNN (1.23 × 10⁻³), as shown in Table 3. As for running time, the BP neural network calls MATLAB built-in functions, so it runs fastest. Wavelet neural networks must call each mother wavelet function and its derivative, so they are more time-consuming; the MWFWNN network must additionally find the optimal wavelet function from the library, so its running time exceeds that of WNN. PSO_SVM calls the PSO algorithm to optimize the parameters c and g, and since PSO is the most complex of these procedures, it runs the slowest.
The prediction errors of BP, PSO_SVM, WNN, and MWFWNN are shown in Figure 3. The figure shows that the prediction errors of WNN, BP, and PSO_SVM are similar: the predictions fall below the true values for samples 1–10, while they are significantly greater than the true values for samples 11–15. The MWFWNN network error follows a similar trend, but it is significantly smaller than that of the WNN, the BP network, and PSO_SVM, with predictions closer to the expected values.
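The core of MWFWNN, summarized in these results, is selecting from the wavelet function library the mother wavelet whose trained network yields the minimum MSE. The following is a simplified sketch of that selection loop: the one-term least-squares fit stands in for full WNN training (an assumption, since the actual training code is not shown), and the three wavelets below are an assumed subset of the paper's seven-function library.

```python
import math

# A small wavelet-function library (assumed subset of the 7 used in the paper)
def morlet(t):      return math.cos(1.75 * t) * math.exp(-t * t / 2.0)
def mexican_hat(t): return (1.0 - t * t) * math.exp(-t * t / 2.0)
def gaus1(t):       return -t * math.exp(-t * t / 2.0)

LIBRARY = {"morlet": morlet, "mexican_hat": mexican_hat, "gaus1": gaus1}

def best_wavelet(ts, ys):
    """For each candidate psi, fit the one-term model y ~ a * psi(t)
    (closed-form least squares for a), score it by MSE on the training
    set, and return the wavelet with minimum MSE - mimicking the
    selection criterion used to build the MWFWNN network."""
    results = {}
    for name, psi in LIBRARY.items():
        ps = [psi(t) for t in ts]
        denom = sum(p * p for p in ps)
        a = sum(y * p for y, p in zip(ys, ps)) / denom
        results[name] = sum((y - a * p) ** 2
                            for y, p in zip(ys, ps)) / len(ys)
    return min(results, key=results.get), results

# Toy training set actually generated by a Morlet wavelet, so the
# selection should recover "morlet" with near-zero MSE.
ts = [i / 10.0 - 3.0 for i in range(61)]
ys = [2.0 * morlet(t) for t in ts]
```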
5. Conclusion
Based on the requirement for rapid information processing in modern information warfare, and considering the characteristics of threat assessment in the data fusion functional model, we adopt MWFWNN to solve the threat assessment problem under comprehensive consideration of the various factors that influence the threat degree. After constructing a wavelet function library with 7 wavelet functions, we obtain the best-performing wavelet function, Morlet, as the mother wavelet function for constructing the MWFWNN network, and its result is compared with the WNN, BP, and PSO_SVM networks. Simulation results show that the MSE of MWFWNN is 1.23 × 10⁻³, which is far better than WNN (4.01 × 10⁻³), the BP network (9.39 × 10⁻³), and PSO_SVM (4.80 × 10⁻³), achieving the desired goal. In future work, we will further expand the scale of the wavelet function library to find more suitable wavelet functions for threat assessment and other problems.
References
[1] F. Johansson and G. Falkman, "A Bayesian network approach to threat evaluation with application to an air defense scenario," in Proceedings of the 11th International Conference on Information Fusion (FUSION '08), pp. 1-7, July 2008.
[2] J. Yang, W. Y. Gao, and J. Liu, "Threat assessment method based on Bayesian network," Journal of PLA University of Science and Technology, vol. 11, no. 1, pp. 43-48, 2010.
[3] J. Li and L. Han, "Threat ordering of interference targets based on uncertain multi-attribute decision-making," Telecommunication Engineering, vol. 49, no. 10, pp. 62-64, 2009.
[4] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "Target threat assessment using glowworm swarm optimization and BP neural network," Journal of Jilin University, in press.
[5] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, "The model and algorithm for the target threat assessment based on Elman_AdaBoost strong predictor," Acta Electronica Sinica, vol. 40, no. 5, pp. 901-906, 2012.
[6] X. D. Gu, Z. X. Tong, S. J. Chai, X. W. Yuan, and Y. L. Lu, "Target threat assessment based on TOPSIS combined by IAHP and the maximal deviation," Journal of Air Force Engineering University, vol. 12, no. 2, pp. 27-31, 2011.
[7] K. Sycara, R. Glinton, B. Yu et al., "An integrated approach to high-level information fusion," Information Fusion, vol. 10, no. 1, pp. 25-50, 2009.
[8] A. Kott, R. Singh, W. M. McEneaney, and W. Milks, "Hypothesis-driven information fusion in adversarial, deceptive environments," Information Fusion, vol. 12, no. 2, pp. 131-144, 2011.
[9] A. Grossmann and J. Morlet, "Decomposition of Hardy functions into square integrable wavelets of constant shape," SIAM Journal on Mathematical Analysis, vol. 15, no. 4, pp. 723-736, 1984.
[10] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 889-898, 1992.
[11] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 27, 2011.