

Research Article

A New ANN-Particle Swarm Optimization with Center of Gravity (ANN-PSOCoG) Prediction Model for the Stock Market under the Effect of COVID-19

Razan Jamous,1 Hosam ALRahhal,1,2 and Mohamed El-Darieby1

1Faculty of Engineering and Applied Science, University of Regina, Regina, SK S4S 0A2, Canada
2Faculty of Engineering, Nahda University, Beni Suef 62764, Egypt

Correspondence should be addressed to Hosam ALRahhal: hosamrahhal@gmail.com

Received 22 October 2020; Revised 12 November 2020; Accepted 12 April 2021; Published 30 April 2021

Academic Editor: Ligang He

Copyright © 2021 Razan Jamous et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Since the declaration of COVID-19 as a pandemic, the world stock markets have suffered huge losses, prompting investors to limit or avoid these losses. The stock market was one of the businesses affected the most. At the same time, artificial neural networks (ANNs) have already been used for the prediction of closing prices in stock markets. However, a standalone ANN has several limitations, resulting in lower accuracy of the prediction results. Such limitations are resolved using hybrid models. Therefore, a combination of artificial neural networks and particle swarm optimization for efficient stock market prediction was reported in the literature. This method predicted the closing prices of the shares traded on the stock market, allowing for the largest profit with the minimum risk. Nevertheless, the results were not that satisfactory. In order to achieve prediction with a high degree of accuracy in a short time, a new improved method called PSOCoG is proposed in this paper. To design a neural network that minimizes processing time and search time and maximizes the accuracy of prediction, it is necessary to identify hyperparameter values with precision. PSOCoG has been employed to select the best hyperparameters in order to construct the best neural network. The created network was able to predict the closing price with high accuracy, and the proposed model ANN-PSOCoG showed that it could predict closing price values with an infinitesimal error, outperforming existing models in terms of error ratio and processing time. Using the S&P 500 dataset, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, SPSOCOG by approximately 17%, SPSO by approximately 20%, and ANN by approximately 25%. Using the DJIA dataset, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, SPSOCOG by approximately 24%, SPSO by approximately 33%, and ANN by approximately 42%. Besides, the proposed model was evaluated under the effect of COVID-19. The results proved the ability of the proposed model to predict the closing price with high accuracy, where the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

1. Introduction

At the beginning of 2020, the world witnessed a large spread of the COVID-19 virus, which was discovered in the Chinese city of Wuhan and was classified as a pandemic by the World Health Organization on March 11, 2020 [1].

Due to the spread of the COVID-19 pandemic, unprecedented measures have been taken to slow down this virus's spread. Hundreds of millions of people were asked to stay inside their homes; air, sea, and land transport stopped; and many factories were suspended. As a result, stock markets were among the worst affected by the COVID-19 pandemic, which led to a large economic crisis: the US Gross Domestic Product of 2020 took a significant short-term hit of 1.5% [2], while in China the hit was between 2% and 3% [3]. The COVID-19 pandemic affects the stock markets through the feelings of fear and panic felt by investors, resulting from the suspension of many economic sectors such as industry, transport, and tourism, as well as the lack of hope of ending the pandemic in the near future [4]. The stock exchange is a preferred choice for investors because it brings them high profits. However, forecasting in the stock exchange is a complex process because stock prices depend on several factors and are greatly affected by rumors [5]. The stock markets have attracted researchers' attention to analyze and predict future values, to guide investors to make correct buying or selling decisions and achieve the best profit with the minimum risk.

Hindawi Scientific Programming, Volume 2021, Article ID 6656150, 17 pages. https://doi.org/10.1155/2021/6656150

Due to the significant need for price prediction in the stock market, many methods were presented to achieve this aim. There is also a growing interest in employing AI techniques to forecast prices in the stock market. One of the most preferred and commonly used methods is the ANN, which can be used purely or combined with other statistical methods [6]. In some cases of complex nonlinear systems like stock markets, the ANN fails to learn the actual processes associated with the corresponding target variable. Also, an ANN model found to be efficient in some cases may be inefficient in others.

Many researchers have been studying stock price prediction for years; some of these studies have improved the existing models, and some have further processed the data. These studies are not perfect: some of the models are too complex, and some of the processing procedures are time-consuming. These shortcomings increase the methods' instability and limit the application and extension of the research results.

The traditional stock forecasting methods could not fit and analyze the highly nonlinear, multifactor stock market well, so they suffer from low prediction accuracy and slow training speed [7]. Therefore, recent studies, such as fusion in stock market prediction [6], financial trading strategy systems [8], and short-term stock market price trend predictions [9], try to find solutions to these problems. Some of these problems are (1) inaccurate prediction and (2) taking a long time as a result of incorrectly selecting the hyperparameter values of the neural network. When using artificial neural networks in forecasting, the selection of hyperparameter values is crucial to construct the artificial neural network's best topology and achieve high-accuracy prediction of prices with the minimum possible error.

Hyperparameters are defined as the parameters that should be tuned before the beginning of the ANN training process. The hyperparameters for the ANN technique include, for example, the number of hidden layers (NHL), the number of neurons per hidden layer (NNPHL), the activation function (ACTFUN) type, and the learning rate (LR). These hyperparameters have the highest priority in designing an ANN and have a massive impact on the ANN's performance. Consequently, the multilayer ANN (MLANN) term first brings to mind multilayered, multinoded complex architectures, and the learning power of an ANN is thought to come from this complexity. NHL and NNPHL are the variables that determine the network structure. They are the most mysterious hyperparameters in that they increase the number of parameters to be trained. Since NHL and NNPHL are the quantities that affect the number of parameters to be trained (weights, biases), the most optimal model, i.e., "the most complex model with the fewest parameters," must be established in both time and learning [10, 11]. Increasing or decreasing the number of hidden layers or the number of hidden nodes might improve the accuracy or might not. This depends on the complexity of the problem, where inappropriate selection of these hyperparameters causes underfitting or overfitting problems [12]. For example, ANNs are universal function approximators, and for them to learn to approximate a function or to predict a task, they need to have enough "capacity" to learn the function.
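The dependence of the trainable-parameter count on NHL and NNPHL can be made concrete. The sketch below is an illustration (not from the paper) that counts weights and biases for a fully connected network given its layer sizes:

```python
def count_parameters(layer_sizes):
    """Count trainable weights and biases of a fully connected ANN.

    layer_sizes: [n_inputs, hidden_1, ..., hidden_k, n_outputs]
    Each consecutive layer pair contributes (fan_in * fan_out) weights
    plus fan_out biases.
    """
    total = 0
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += fan_in * fan_out + fan_out
    return total

# A 4-input network with two hidden layers of 8 and 5 units and 1 output:
# (4*8 + 8) + (8*5 + 5) + (5*1 + 1) = 40 + 45 + 6 = 91 parameters
print(count_parameters([4, 8, 5, 1]))
```

Even small changes in NHL or NNPHL change this count quickly, which is why these two hyperparameters dominate both training time and the risk of under- or overfitting.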

NNPHL is the main measure of the learning model's capacity. A simple function might need a smaller number of hidden units; the more complex the function, the more learning capacity the model will need. Increasing the number of units by a small amount is not a problem, but a much larger number will lead to overfitting: a model with too much capacity might tend to overfit, trying to "memorize" the dataset, which affects its capacity to generalize [13]. As seen, it is very important to select the best values of these hyperparameters.

In neural networks (NNs), activation functions are used to transfer the values from one layer to another. The motivation for the activation functions defined by multilayer neural networks is to compose simple transformations to obtain highly nonlinear ones. In particular, MLPs compose affine transformations and elementwise nonlinearities. With the appropriate choice of parameters, multilayer neural networks can, in principle, approximate any smooth function, with more hidden units allowing one to achieve better approximations. Also, activation functions are applied after every layer and need to be calculated millions of times in some cases; hence, they need to be computationally inexpensive to calculate [12]. An inappropriate selection of the activation function can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during backpropagation, or it can lead to high computational expenses [14].

The learning rate is one of the most important hyperparameters to tune in an ANN to achieve better performance. The learning rate determines the step size at each training iteration while moving towards an optimum of a loss function; the amount by which the weight and bias parameters are updated is governed by the learning rate. If the learning rate (LR) is much smaller than the optimal value, the model learns slowly, taking up to hundreds or thousands of epochs to reach an ideal state; an LR that is too small may also get stuck in an undesirable local minimum, and overfitting may occur. On the other hand, if the learning rate is much larger than the optimal value, the model learns much faster but overshoots the ideal state, and the algorithm might not converge. This can cause undesirable divergent behavior in the loss function [15].
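The two failure modes described above can be seen on a toy loss function. The sketch below (an illustration, not from the paper) runs plain gradient descent on f(w) = w², whose gradient is 2w, with a small and a large learning rate:

```python
def gradient_descent(lr, w0=1.0, steps=50):
    """Minimize f(w) = w^2 with a fixed learning rate lr."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w  # w_{t+1} = w_t - lr * f'(w_t), with f'(w) = 2w
    return w

# Each step multiplies w by (1 - 2*lr):
# lr = 0.1 -> factor 0.8, so |w| shrinks toward the minimum at w = 0;
# lr = 1.1 -> factor -1.2, so |w| grows every step and the run diverges.
small = gradient_descent(lr=0.1)
large = gradient_descent(lr=1.1)
```

With lr = 0.1 the iterate ends near zero, while with lr = 1.1 its magnitude explodes, mirroring the overshooting behavior the paragraph describes.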

The determination of the best hyperparameters depends on selecting suitable parameters that have to be tuned. The art of optimizing the network's hyperparameters amounts to ending up at the balance point between underfitting and overfitting. After implementing the optimal value for each hyperparameter in an iterative way, the optimal hyperparameters will build the best architecture for the ANN. This new model can improve the accuracy of future forecasted data and reduce the time consumed by the prediction process. However, this procedure remains a challenging task.


Choosing inappropriate values for these variables leads to building an inappropriate network structure and gives inaccurate results that require many iterations to improve performance, which consumes a lot of time. These shortages lead to the need for hybridization. A hybrid model integrates two or more ANNs and other models to overcome certain standalone ANN models' limitations and hence increase prediction accuracy and reduce time. The hybrid model is used for modeling complex target variables that involve different processes. Generally, the hybrid model consists of a decision-making model integrated with a learning model. The decision-making model helps the ANN to select the best hyperparameter values. For that, various techniques can be used as decision-making models, such as particle swarm optimization (PSO) [16], backpropagation [17], genetic algorithms [18], ant colony optimization [19], bee swarm optimization (BSO) [20], Tabu search [21], and fuzzy systems [22].

As stated before, ANNs have already been employed for the prediction of closing prices in stock markets, but a standalone ANN has several limitations, resulting in lower accuracy of the prediction results. However, these limitations of the ANN are resolved using hybrid models.

In this paper, using a physical concept called the center of gravity (CoG), an enhanced PSO algorithm called PSO with the center of gravity (PSOCoG) is proposed. PSOCoG was used to select the best values of the ANN's hyperparameters; the resulting model is called ANN-PSOCoG. The proposed ANN-PSOCoG model was used to forecast the closing price in the stock market. The effect of COVID-19 on ANN-PSOCoG was also studied. In the proposed ANN-PSOCoG hybrid model, the new enhanced algorithm (PSOCoG) is used as a decision-making model, and the ANN is used as a learning model.

The essential contributions of this paper can be summarized as follows:

(1) We design a generic model that can be used to select the best values of hyperparameters for an ANN.

(2) We find the best configuration for the ANN in terms of NHL, NNPHL, ACTFUN, and LR using the proposed PSOCoG for ANN.

(3) The proposed model can be utilized to estimate the closing price using different historical data of stock market indices with high accuracy and minimum error.

(4) We compare the proposed model with the traditional ANN, the standard PSO (SPSO), and ANN with standard PSO models.

(5) We study the efficiency of the proposed model underthe effect of COVID-19

The rest of the paper is organized as follows. Section 2 introduces the background of standard particle swarm optimization and the stock market. A literature review is summarized in Section 3. The description of the proposed technique is given in Section 4. The performance evaluation is presented in Section 5, followed by the conclusion and future work in Section 6.

2. Background

To make the paper self-contained, a brief review of the ANN, SPSO, and the stock market is introduced in the following subsections.

2.1. Artificial Neural Networks (ANNs). A classical ANN consists of three related layers: the input layer, the hidden layer, and the output layer. The number of units in the input and output layers depends on the size of the input and output data. While the input units receive original data from the outside world and provide it to the ANN, no processing is executed in any of the input units; instead, these units pass the information on to the hidden units. The hidden nodes perform data processing and transfer information from the input units to the output units. The information is then processed and transmitted by the output units from the ANN to the outside world.

Figure 1 shows a model of an artificial neuron. In this model, there are various inputs to the neural network, \(x_0, x_1, x_2, \ldots, x_n\), which represent one single observation. Let \(W_j = (W_{1j}, W_{2j}, \ldots, W_{nj})\) be the weight vector of the \(j\)th unit of the hidden layer and \(b\) be a bias value that permits moving the transfer function up or down. The output is created by passing the sum of each input multiplied by its related weight through an activation function. Mathematically, this can be expressed in the following equation:

\[ x_1 W_{1j} + x_2 W_{2j} + x_3 W_{3j} + \cdots + x_n W_{nj} = \sum_{i} x_i W_{ij}. \tag{1} \]

After the output is produced, an activation function \(\varphi\left(\sum_i x_i W_{ij}\right)\) is used to transmit the information to the outside world. Activation functions are used to get the output of a node; they determine the output of the ANN, such as yes or no. The activation functions map the resulting values into a range of values between 0 and 1, or -1 and 1, and so on, depending on the type of function used.

Activation functions can be basically divided into linear and nonlinear activation functions. There are many types of activation functions; the most commonly used for multilayer networks are log-sigmoid (logsig), softmax, tan-sigmoid (tansig), and the linear activation function (purelin) [23].
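The activation functions named above can be sketched directly in code. This is a generic illustration (the names follow MATLAB conventions, as in the paper), not the paper's implementation:

```python
import math

def logsig(x):
    """Log-sigmoid: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Tan-sigmoid (tanh): maps any real input into (-1, 1)."""
    return math.tanh(x)

def purelin(x):
    """Linear activation: the identity, often used on output layers."""
    return x

def softmax(xs):
    """Softmax over a vector: positive outputs that sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

Note that logsig and tansig saturate for large |x|, which is the source of the vanishing-gradient issue mentioned earlier, while purelin never saturates.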

ANN systems work in a different way, where the input data are combined to predict the output data. These systems are optimized in a way that allows for the computation of errors. To compute errors in these systems, the expected ANN outputs are compared to the real targets. These systems have iteration ability, where errors can be computed and estimated again, leading to a decrease in errors until the smallest possible error is found.


2.2. Standard Particle Swarm Optimization (SPSO). SPSO was presented by Kennedy and Eberhart [24]. SPSO mimics the simple behavior of organisms and their local cooperation with the environment and neighboring organisms to develop behaviors used for solving complex problems such as optimization issues. The PSO algorithm has many advantages compared to other swarm intelligence (SI) techniques. For instance, its search procedure is simple, effective, and easy to implement, and it can effectively find the best global solutions with high accuracy. PSO is a population-based search procedure in which each individual represents a particle, i.e., a possible solution. These particles are grouped into a swarm. The particles, moving within a multidimensional space, adapt their locations depending on their experience and that of their neighbors. The principle of the PSO technique can be explained [25] as follows. Let \(X_i(t) = (x_{i1}^{(t)}, x_{i2}^{(t)}, \ldots, x_{id}^{(t)})\) denote the position of particle \(i\) and \(V_i(t) = (v_{i1}^{(t)}, v_{i2}^{(t)}, \ldots, v_{id}^{(t)})\) denote the velocity of particle \(i\) in the search area at time-step \(t\). Also, let \(P_i = (p_{i1}, p_{i2}, \ldots, p_{id})\) and \(P_g = (p_{g1}, p_{g2}, \ldots, p_{gd})\) indicate the best positions found by the particle itself and by the swarm, respectively. The new location of the particle is updated as follows:

\[ V_{id}^{(t+1)} = w V_{id}^{(t)} + c_1 r_1 \left( P_{id} - X_{id}^{(t)} \right) + c_2 r_2 \left( P_{gd} - X_{id}^{(t)} \right), \quad i = 1, 2, \ldots, N, \tag{2} \]

\[ X_{id}^{(t+1)} = X_{id}^{(t)} + V_{id}^{(t+1)}, \quad i = 1, 2, \ldots, N, \tag{3} \]

where the particle moves in a multidimensional space, \(c_1\) and \(c_2\) are positive constants, \(r_1\) and \(r_2\) are random numbers in the range [0, 1], and \(w\) is the inertia weight. \(V_i(t)\) controls the improvement process and represents both the personal experience of the particle and the knowledge swapped with the other surrounding particles. The personal experience of a particle is usually indicated as the cognitive term in (2), which represents the attraction towards the best local position. The swapped knowledge is indicated as the social term in (2), which represents the attraction towards the best global position of the swarm.
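Equations (2) and (3) translate directly into code. The sketch below is an illustrative SPSO implementation (not the paper's code; the parameter values are common defaults, not the paper's settings), shown minimizing a simple sphere function:

```python
import random

def spso(fitness, dim=2, n_particles=20, iters=100,
         w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=0):
    """Standard PSO: velocity update per eq. (2), position update per eq. (3)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions P_i
    pbest = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best position P_g
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Eq. (2): inertia + cognitive term + social term
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                # Eq. (3): move the particle
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pbest[i]:
                P[i], pbest[i] = X[i][:], f
                if f < gbest:
                    G, gbest = X[i][:], f
    return G, gbest

best_pos, best_val = spso(lambda x: sum(v * v for v in x))
```

With these convergent settings (w = 0.7, c1 = c2 = 1.5), the swarm quickly contracts around the sphere function's minimum at the origin.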

2.3. Stock Market. The stock market is an exchange where securities and shares are traded at a price organized by demand and supply [26]. According to [27], the stock market is defined by investors' reactions, which are driven by the information associated with the "real value" of companies. The stock market is one of the most important ways of building wealth, and stocks are the cornerstone of any investment portfolio.

The most significant stock market data, called technical data, include the closing price, the high price of a trading day, the low price of a trading day, and the volume of stock market shares traded per day [28]. A stock index is a measure of a stock market that enables investors to compare the new and old prices to gauge market performance [29]. There are many common indices used with the stock market, such as Standard & Poor's 500 (S&P 500), the National Association of Securities Dealers Automated Quotations 100 (NASDAQ-100), the Dow Jones Industrial Average (DJIA), Gold, and the Canadian Dollar to United States Dollar (CADUSD) [30]. The historical data of each of these indices, including the technical data, can be used as a dataset to train the ANN.

3. Related Work

A survey of the relevant work shows that there are many studies on forecasting the stock market's future values. The recent technological advances that support AI have led to the appearance of a new trend of investing. In this regard, buy and sell decisions can be made by treating big amounts of historical data for stock market indices and determining the risks related to them with high accuracy. Many models fall within this trend, such as (1) a dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting [31], (2) a stock market trading rule based on pattern recognition and technical analysis [32], (3) an intelligent pattern recognition model for supporting investment decisions in the stock market [33], (4) using a support vector machine with a hybrid feature selection method for stock trend prediction [34], (5) forecasting stock market trends using machine learning algorithms [35], and (6) deep learning for stock market prediction [36]. For example, the author in [37] was the first to attempt to use an ANN to model the economic time series of the IBM company. This method tries to understand the nonlinear regularity of changes in asset prices, like day-to-day variations. However, the scope of the work was limited, as the author used a feedforward neural network with only one hidden layer with five hidden units in it, without considering the ability to use variant numbers of hidden layers or hidden nodes. Also, the author did not study the effect of the activation function or the learning rate.

In another study [38], the authors proposed a new forecasting method to predict Bitcoin's future price. In this method, PSO was used to select the best values for the number of hidden units, the input lag space, and the output lag space of the Nonlinear Autoregressive with Exogenous Inputs model. The results showed the ability of the model to predict Bitcoin prices accurately. It is worth noting that the authors of [38] did not discuss the impact of the NHL, the ACTFUN, and the learning rate on the performance of the algorithm. However, the proposed technique considered NHL and used different values for NHL, which allows selecting the best hyperparameters of the ANN. The proposed model also studied the impact of the activation function and the LR.

[Figure 1: Artificial neuron model. Inputs \(x_0, x_1, \ldots, x_n\) with synaptic weights \(W_0, W_1, \ldots, W_n\) and a bias \(b\) (with input 1) feed a summation \(\sum = b + x_0 W_0 + x_1 W_1 + \cdots + x_n W_n\), followed by an activation function \(f\) that produces the output.]

A flexible neural tree (FNT) ensemble technique is proposed in [39]. The authors in [39] developed a trusted technique to model the attitude of the seemingly chaotic stock markets. They used a tree-structure-based evolutionary algorithm and PSO algorithms to optimize the structure and parameters of the FNT. The Nasdaq-100 index and the S&P CNX NIFTY stock index are used to evaluate their technique. The authors of [39] did not consider the impact of the depth of the neural tree, that is, the NHL. They also did not study the impact of other hyperparameters, such as ACTFUN, the LR, and the number of children at each level, in other words, the number of hidden nodes.

In [40], the author presented a method based on Multilayer Perceptron and Long Short-Term Memory networks to predict stock market indices. The historical data of the Bombay Stock Exchange (BSE) Sensex from the Indian stock market were used to evaluate the method. In [40], the selected NN architecture was verified manually, not automatically, during a trial-and-error process. The author also used only one hidden layer and changed only the number of hidden units per hidden layer, without considering the activation function and the learning rate. However, the proposed technique selects a suitable structure from all possible structures, which increases the probability of selecting the optimal structure, and uses multiple hidden layers and multiple nodes per hidden layer. The proposed technique changes NHL and NNPHL to adjust the selection of the optimal structure and considers the effect of ACTFUN and the LR.

In another work, the authors of [41] suggested an enhanced PSO to train Sigmoid Diagonal Recurrent Neural Networks (SDRNNs) and optimize the parameters of the SDRNN. The historical data of the NASDAQ-100 and S&P 500 stock market indices were used to evaluate their model. The authors in [41] did not discuss the selection of the best structure of the network in terms of NHL, NNPHL, ACTFUN, and LR.

The authors in [42] presented a new stock market prediction method, applying the firefly algorithm with an evolutionary method. They implemented their method on the Online Sequential Extreme Learning Machine (OSELM). Their model's performance was evaluated using the datasets of the BSE Sensex, NSE Sensex, S&P 500, and FTSE indices. The authors of [42] did not optimize the hyperparameters of the network.

In [43], the authors suggested a new end-to-end hybrid neural network approach to forecast the price direction of a stock market index. This approach depends on learning multiple time-scale features extracted from the daily price and a CNN to perform the prediction, using fully connected layers that bring together features learned by the Long Short-Term Memory network. The historical data of the S&P 500 index were used to evaluate this approach. The authors of [43] did not study the effect of optimizing the hyperparameters on the network's performance.

The authors of [44] proposed a hybrid model to forecast the Nikkei 225 index price for the Tokyo stock market. The hybrid model used an ANN and genetic algorithms to optimize the accuracy of stock price prediction; the ANN was trained using backpropagation as a learning algorithm. In [44], the authors used an ANN with only one hidden layer and did not consider the effect of hyperparameters of the ANN such as NHL, NNPHL, ACTFUN, and LR. In [45], the authors proposed an adaptive system to predict the price in the stock market. This system uses the PSO algorithm to overcome the problems of the backpropagation approach and to train the ANN to forecast the prices of the S&P 500 Index and the NASDAQ Composite Index, therefore helping investors to make correct trading decisions. They also used an ANN with only one hidden layer with a fixed NNPHL equal to 2N-1, where N is the number of inputs. They did not consider the effect of other hyperparameters such as ACTFUN and LR.

In [46], the authors studied the different modifications of PSO and its application to the stock market to predict prices. In another work [47], the PSO algorithm selects the optimally weighted set of trading signals that specify a buy, sell, or hold order depending on evolutional learning from the New York Stock Exchange (NYSE) and the Stock Exchange of Thailand (SET).

In [48], the authors proposed an effective prediction model using PSO to forecast the S&P 500 and DJIA stock indices for the short and long term. This model applies an adaptive linear combiner (ALC), whereas PSO modifies its weights. The authors compared their model with an MLP-based model. The results show that their model performs better than the MLP-based model in terms of accuracy and training time.

4. The Proposed Technique

Determining the neural network parameters to create an appropriate architecture that produces output with an accepted error is very time- and cost-consuming. To determine the architecture of the network, the configuration is generally carried out by hand in a trial-and-error fashion. However, the manual tuning of the hyperparameters of an ANN through the trial-and-error process until finding accurate configurations consumes a long time. A different approach uses global optimization techniques, for instance, applying the PSO algorithm to choose the best architectures with minimum error. As can be seen from the discussion of the related literature, most of the proposed methods suffer from some limitations, such as using a certain fixed NHL, a fixed NNPHL, or a fixed number of iterations. In most of these methods, the proposed approaches study the effect of only one of these hyperparameters at a time. Some of the suggested algorithms are time-consuming and have a high computational cost. Particle swarm optimization has been very usefully employed in finding optimal parameters [49]. For that reason, in our earlier work [50], the effect of NHL and NNPHL on the performance of an ANN using PSO was studied and discussed.

The purpose of the proposed model is to provide a potential solution to the problem of designing the best structure for a multilayer ANN (MLANN) by selecting the best configuration for the network, that is, choosing the best hyperparameters, including NHL, NNPHL, ACTFUN, and the learning rate, to maximize prediction accuracy and minimize processing time.

Scientific Programming 5

Now we modify the standard PSO by adding the "center of gravity (CoG)" concept, which can be characterized as a mean of the gravities calculated by their distances from a reference center. A new algorithm called PSO with center of gravity (PSOCoG) is proposed in this paper, as described in the following paragraphs. Assume that there are $N_g$ single gravitational points $1, \ldots, N_g$. The movement of these gravitational individuals is identified by their location vectors $\vec{s}_i(t)$, where $i = 1, \ldots, N_g$. The CoG is the position in a system of gravities whose location vector $\vec{S}$ is determined from the gravities $g_i$ and the location vectors $\vec{s}_i$ in this way:

$$\vec{S} = \frac{1}{G}\sum_{i=1}^{N_g} g_i \times \vec{s}_i, \qquad (4)$$

$$G = \sum_{i=1}^{N_g} g_i, \qquad (5)$$

where $G$ is the sum of gravities and $N_g$ is the number of gravities [51].

In this technique, an efficient CoG-particle ($X_{cg}$) is suggested. $X_{cg}$ participates in speeding up the convergence of the model in fewer repetitions; it helps in finding a solution closer to the optimum and enhances the quality of the solution. The CoG-particle is a virtual member of the swarm used to represent the swarm at each repetition. In addition, the CoG-particle is weighted by the values of a fitness function for the swarm members. It has no speed and does not perform any of the standard swarm member's tasks, such as fitness evaluation or searching for the optimal solution. The weighted center of the swarm can be determined as follows, by considering the swarm individuals as a group of gravity points and letting the value of the target function of each individual play the role of the gravity:

$$X_{cg}(t) = \frac{1}{F_{cg}} \sum_{i=1}^{N_g} F(x_i) \times x_i(t), \qquad (6)$$

where $F(x_i)$ is the fitness value of member $i$ at location $x_i(t)$ and $F_{cg}$ is the sum of the fitness values of all members. Using formula (7), $F_{cg}$ can be determined similarly to the sum of gravities $G$ in formula (5). The fitness value $F(x_i)$ for maximum optimization is given in formula (8), while formula (9) is used for minimum optimization:

$$F_{cg} = \sum_{i=1}^{N_g} F(x_i), \qquad (7)$$

$$F(x_i) = f(x_i), \qquad (8)$$

$$F(x_i) = \frac{1}{f(x_i) + \varepsilon}, \quad \varepsilon \neq 0, \qquad (9)$$

where $f(x_i)$ is the objective function, whose range is supposed to be positive in this situation.

The updated speed formula for each member can be created via equation (10) by computing $X_{cg}$, the best global solution at the whole-swarm level $P_{gs}$, and the best local solution detected by every individual $P_{il}$, as follows:

$$v_{id}^{(t+1)} = w\, v_{id}^{(t)} + c_1 r_1 \left( \frac{P_{ild}^{(t)} + X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right) + c_2 r_2 \left( \frac{P_{gsd}^{(t)} - X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right), \quad i = 1, \ldots, N_g. \qquad (10)$$

Then the modified location of any individual in the swarm is calculated using

$$X_{id}^{(t+1)} = v_{id}^{(t+1)} + X_{id}^{(t)}, \quad i = 1, \ldots, N_g, \qquad (11)$$

where the particle moves in a multidimensional space. The searching cycle proceeds until the termination condition is true. The idea behind the CoG-particle ($X_{cg}$) is to drive the convergence of swarm individuals towards the global optimal solution and help them move away from local solutions. This can be clarified by explaining the function of the second and third terms in (10). The term $((P_{ild}^{(t)} + X_{cg}^{(t)})/2 - X_{id}^{(t)})$ governs the attraction of the individual's present location towards the average of the positive direction of its best local solution ($+P_{ild}$) and the positive direction of the location of the CoG-particle ($+X_{cg}$), which helps the personal-experience term to move away from local solutions. The term $((P_{gsd}^{(t)} - X_{cg}^{(t)})/2 - X_{id}^{(t)})$ governs the attraction of the swarm member's present location towards the average of the positive direction towards the best global solution ($+P_{gs}$) and the negative direction of the location of the CoG-particle ($-X_{cg}$), which helps preserve population diversity (the exploration and exploitation process) during the searching operation. This increases the probability of rapidly approaching global solutions (or near-global solutions), since the CoG-particle pulls swarm members towards the area of the best-discovered promising solutions. Consequently, it offers individuals the best opportunity to reach the location of the global best-discovered candidate solution during the discovery phase. All earlier motions are supported by a linearly decreasing weight, which makes it possible to adjust the population diversity during the searching operation.

The proposed prediction model uses the new PSOCoG technique to train the ANN, and the resulting model is called the ANN-PSOCoG model. In the ANN-PSOCoG model, a 4-dimensional search space is used. The first dimension is the number of hidden layers (NHL), the second is the number of neurons per hidden layer (NNPHL), the third is the type of activation function (ACTFUN), and the fourth is the learning rate (LR). Any particle inside the search area is considered a candidate solution. This means that the location of a particle determines the values of NHL, NNPHL, ACTFUN, and LR, which represent a possible configuration for the network. In the search phase, the PSO technique is used to find the best settings by flying the particles within a bounded search space. Each particle


has its own attributes, which are location, speed, and a fitness value calculated by a fitness function. The particle's speed defines the next movement (direction and traveled distance). The fitness value represents an index of the particle's convergence to the solution. The position of each particle is updated to approach the individual that has the optimal location according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution the particle can reach personally; the second is the global optimal solution, which is the solution the swarm can obtain cooperatively so far. The suggested technique uses several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

$$f_i = \frac{1}{m}\sum_{i=1}^{m} \left(\text{expected output}_i - \text{actual output}_i\right)^2, \qquad (12)$$

where $m$ is the number of inputs for the ANN. The fitness values of all swarm members are determined using (12). For each particle, the location is stored as $X_i$ and the fitness value is stored as $P_{il}$. Among all $P_{il}$, the $P_{il}$ with the minimum fitness value is selected as the global best particle ($P_{gs}$); then the location and fitness value of $P_{gs}$ are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process is iterated until the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters, which are used to build the ANN. Algorithm 1 explains the suggested model.
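The overall search procedure can be sketched as a compact loop. As an illustration only, the expensive "build and evaluate an ANN" step of the paper is replaced here by a cheap stand-in objective (formula (12) against a fixed toy target vector), so the sketch shows the PSOCoG mechanics rather than the full model; all names and the seeded generator are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for "run the NN and evaluate it": formula (12) applied to a
# fixed target vector instead of real network predictions (our assumption).
target = np.array([0.5, -0.2, 0.1, 0.0])

def obj(p):
    return np.mean((p - target) ** 2)   # formula (12)

def psocog_search(obj, dim=4, pop=20, maxite=100,
                  c1=2.0, c2=2.0, wmax=0.9, wmin=0.4):
    x = rng.uniform(-1, 1, (pop, dim))            # random initial swarm
    v = np.zeros((pop, dim))
    p_il = x.copy()                               # personal best positions
    f_il = np.array([obj(p) for p in x])          # personal best fitness
    for t in range(maxite):
        p_gs = p_il[np.argmin(f_il)]              # global best so far
        fit = 1.0 / (f_il + 1e-9)                 # formula (9), minimization
        x_cg = (fit[:, None] * x).sum(0) / fit.sum()   # formulas (6)-(7)
        w = wmax - (wmax - wmin) * t / maxite     # linearly decreasing inertia
        r1 = rng.random((pop, dim))
        r2 = rng.random((pop, dim))
        v = (w * v + c1 * r1 * ((p_il + x_cg) / 2 - x)
                   + c2 * r2 * ((p_gs - x_cg) / 2 - x))   # formula (10)
        x = x + v                                          # formula (11)
        f = np.array([obj(p) for p in x])
        better = f < f_il                         # update personal bests
        p_il[better], f_il[better] = x[better], f[better]
    return p_il[np.argmin(f_il)], f_il.min()

best, best_f = psocog_search(obj)
```

Since personal bests are only ever replaced by strictly better positions, the returned fitness is non-increasing over iterations.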

5. Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of the Standard and Poor's 500 (S&P 500) [52] and the Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The S&P 500 Index, or the Standard & Poor's 500 Index, is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market, highlighting the performance of large-cap entities selected by leading economists and weighted by market values. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and is regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3776 trading days, from August 16, 2005, to August 16, 2020. We also used the historical data of the DJIA dataset, where the overall number of observations for the exchange indices was 9036 trading days, from January 28, 1985, to December 02, 2020. Each observation includes the opening price, highest price, lowest price, the

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (f_i) for each particle
(7)     if (the fitness value is less than P_il)
(8)       Set the current fitness value as the new (P_il)
(9)     end-if
(10)    Select the minimum (f_i) of all particles in the swarm as P_gs
(11)    for d = 1 to D
(12)      Calculate V_id^(t+1) from Equation (10)
(13)      Calculate X_id^(t+1) from Equation (11)
(14)    end-for
(15)  end-for
(16)  Return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.


closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, the data are divided such that 80% of the available data were used as the training set, 10% of the available data were used as the validation set, and the last 10% of the available data were used as the test set.
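The 80/10/10 chronological split described above can be sketched as follows (the function name is illustrative, and taking the splits in time order is our assumption for market data):

```python
# Illustrative 80/10/10 chronological split, as described in the paper.
def split_dataset(rows):
    """Split observations into training (80%), validation (10%), test (10%)."""
    n = len(rows)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = rows[:n_train]
    val = rows[n_train:n_train + n_val]
    test = rows[n_train + n_val:]           # remainder goes to the test set
    return train, val, test

# S&P 500 has 3776 trading-day observations in the paper's sample.
train, val, test = split_dataset(list(range(3776)))
```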

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) = 20 and the maximum number of iterations (maxite) = 100. Besides, the following settings were used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (HL, NPHL, ACTFUN, and LR) was used to represent the hyperparameters of the ANN.
(vi) The initialization range for HL was set to [1, 7] according to [44, 58].
(vii) NPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg–Marquardt algorithm [62] was used as the training function for the network.
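The linearly decreasing inertia weight in setting (iv) can be written as a one-line schedule. The exact interpolation endpoints (reaching Wmin precisely at the last iteration) are an assumption; the paper only states a linear decrease from Wmax to Wmin:

```python
def inertia_weight(t, maxite=100, wmax=0.9, wmin=0.4):
    """Linearly decrease inertia from wmax at t=0 to wmin at t=maxite-1 [57]."""
    return wmax - (wmax - wmin) * t / (maxite - 1)

w_first = inertia_weight(0)    # weight at the first iteration
w_last = inertia_weight(99)    # weight at the last iteration
```

Early iterations thus favor exploration (large steps), while late iterations favor exploitation around the best-found region.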

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\text{MAPE} = \frac{1}{n}\sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \qquad (13)$$

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \qquad (14)$$

$$\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \qquad (15)$$

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} \left( \text{actual output}_i - \text{expected output}_i \right)^2. \qquad (16)$$
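The four metrics (13)–(16) translate directly into NumPy; the toy input values and the vectorized form below are our own illustration:

```python
import numpy as np

def mape(actual, expected):
    """Formula (13): mean absolute percentage error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return np.mean(np.abs((a - e) / a)) * 100

def mae(actual, expected):
    """Formula (14): mean absolute error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return np.mean(np.abs(a - e))

def relative_error(actual, expected):
    """Formula (15): per-observation relative error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return (a - e) / a

def mse(actual, expected):
    """Formula (16): mean squared error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return np.mean((a - e) ** 2)

acp = [100.0, 200.0]   # actual closing prices (toy values)
ecp = [110.0, 190.0]   # expected (predicted) closing prices (toy values)
```

Note that MAPE and RE divide by the actual price, so they are scale-free, whereas MAE and MSE are in price units.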

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG is also compared with the other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only the artificial neural network was used to predict the closing price. The historical data of the Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg–Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using the historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE for this method is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization with the CoG concept was used to predict the closing price using the historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE for this method is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and the standard particle swarm optimization (ANN-SPSO) was


used to predict the closing price, where SPSO was used to train the network to select the best configurations. The dataset of historical data of the S&P 500 was used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts the ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while the RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of the HL, NPHL, ACTFUN, and LR. The dataset of historical data of the S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7,


Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.


Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.


Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: NHL was equal to 6, NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.

Although the process of determining the best hyperparameters of the ANN usually consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of regression (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of the regression coefficient (R) was 0.99995 for training, 0.99992 for validation, and 0.99994 for testing, while for all it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the output forecasted by the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with very high accuracy, close to the actual closing price. Based on the above discussion, the proposed model is able to give investors the confidence to make the correct decisions to buy or sell shares, achieving the largest possible profit while avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, expressing the fluctuation of the ECP around the ACP. So, if the fluctuation of the price increases, the error of prediction increases, and the risk of predicting the closing price for the stock market increases.

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.
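To illustrate how a particle's 4-dimensional position translates into a concrete network configuration such as the one selected above, the following sketch maps a continuous position vector onto the hyperparameter ranges stated in Section 5.2. The function name and the rounding/clipping scheme are assumptions for illustration, not taken from the original implementation:

```python
# Illustrative decoding of a 4-D particle position into ANN hyperparameters.
# The ranges and activation list follow the paper's settings; the rounding
# scheme itself is an assumption.
ACTFUNS = ['logsig', 'softmax', 'tansig', 'purelin']

def decode_particle(position):
    """Map a continuous position (nhl, nnphl, act, lr) to a network config."""
    nhl_raw, nnphl_raw, act_raw, lr_raw = position
    nhl = int(round(min(max(nhl_raw, 1), 7)))        # hidden layers in [1, 7]
    nnphl = int(round(min(max(nnphl_raw, 14), 21)))  # neurons/layer in [14, 21]
    act = ACTFUNS[int(act_raw) % len(ACTFUNS)]       # one of four activations
    lr = min(max(lr_raw, 0.01), 0.9)                 # learning rate in [0.01, 0.9]
    return {'NHL': nhl, 'NNPHL': nnphl, 'ACTFUN': act, 'LR': lr}

# A position near the best configuration reported in Scenario 5.
cfg = decode_particle([5.8, 20.6, 3.2, 0.5469])
```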

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using the historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, while it outperformed SPSOCoG by about 17%; it also outperformed SPSO by about 20%, and ANN-PSOCoG outperformed ANN by approximately 25%.

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, while it outperformed SPSOCoG by about 24%; it also outperformed SPSO by about 33%, and ANN-PSOCoG outperformed ANN by approximately 42%. Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, as well as MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG performance is the best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best among the compared models, and ANN-PSOCoG is more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric through the training phase. Figure 10(a) shows the MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows the MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the minimum value of MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest compared to the other models.


Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.


5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time, for example, the drop of the S&P 500 index in the USA during the COVID-19 crisis, where it lost one-third of its value during only one month [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dotcom crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020, it took only one month for the S&P 500 to lose one-third of its value, while it took one year for the subprime crisis to decline by the same amount and one year and a half for the dotcom bust.

Until September 2, 2020, there had been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210


Figure 6: The regression values for ANN-PSOCoG. (a) Training, R = 0.99995. (b) Validation, R = 0.99992. (c) Test, R = 0.99994. (d) All, R = 0.99994.


Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019, to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices such as the S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG to the historical data of the S&P 500. Figure 12(b) shows the relative error between the ACP and ECP for ANN-PSOCoG over the historical data of the S&P 500.

Figure 13(a) shows the ACP and ECP for ANN-PSOCoG over the historical data of Gold, while Figure 13(b) shows the RE for ANN-PSOCoG over the historical data of Gold.

The ACP and ECP for the tested ANN-PSOCoG using the historical data of NASDAQ-100 are shown in Figure 14(a), while the RE of ANN-PSOCoG using the NASDAQ-100 data is shown in Figure 14(b).


Figure 8: The comparison between the proposed model and the other models.


Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics            | ANN-PSOCoG | ANN-SPSO  | SPSOCoG  | SPSO       | ANN
MAPE               | 0.00024    | 0.0027    | 0.0038   | 0.0053     | 0.0070
MAE                | 0.00017    | 0.047     | 0.067    | 0.070      | 0.087
Elapsed time (sec) | 510.82822  | 891.28091 | 952.1345 | 1123.75813 | 1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics            | ANN-PSOCoG | ANN-SPSO | SPSOCoG  | SPSO     | ANN
MAPE               | 0.00072    | 0.030    | 0.042    | 0.062    | 0.071
MAE                | 0.00055    | 0.0024   | 0.0033   | 0.0049   | 0.0056
Elapsed time (sec) | 83.4259    | 97.3215  | 105.6284 | 138.9368 | 153.7569



Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index, = 100 at peak, versus days from the peak): dotcom crisis, subprime crisis, and COVID-19 crisis.


Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.


Figure 12: Performance of ANN-PSOCoG over historical data of the S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test was training the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the US dollar. The actual rate and the expected rate are shown in Figure 15(a), while the relative error of the prediction is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of the S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12–15, the results of the proposed ANN-PSOCoG showed a high accuracy of


Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification yields a new, efficient search technique. It benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using the historical data of two different datasets, namely the S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network, with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. The results also show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model using the S&P 500 dataset. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model. The values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

The proposed model might need more training time, and it requires modern computational resources when used with a very huge dataset, an oil reservoir dataset for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work some points can be discussed to improvethe proposed model as follows

(i) We study the effect of changing NNPHL for eachlayer instead of being the same in all hidden layersin the proposed model

(ii) We extend the proposed ANN-PSOCoG to discussthe ability to select more hyperparameters such asthe number of iterations and the size of a batch

(iii) We discuss the effect of the activation function ifwe apply the same activation function for all thelayers or apply different activation functions foreach layer

(iv) We study the impact of using more than one swarmfor PSOCoG can be considered

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We discuss more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics    S&P 500     Gold        NASDAQ-100    CANUSD
MAPE       0.00158     0.007799    0.00239       0.001719891
MAE        0.030529    0.079009    0.224702      1.19491E-05
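The two error metrics reported in Table 3 can be reproduced directly from predicted and actual closing prices. A minimal sketch (the price arrays below are hypothetical placeholders, not values from the paper's datasets):

```python
def mape(actual, predicted):
    # Mean absolute percentage error: mean of |(a - p) / a|
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    # Mean absolute error: mean of |a - p|
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [3270.5, 3281.1, 3295.4]     # hypothetical closing prices
predicted = [3268.9, 3284.0, 3293.7]  # hypothetical model outputs

print(mape(actual, predicted))
print(mae(actual, predicted))
```

A lower value on either metric means the forecasted closing prices track the real ones more closely.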

Scientific Programming 15

References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyper-parameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets: https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.


[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) Bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology (ICGST), vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard & Poor's 500 (S&P 500): https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices: https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/rimanual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies (IHIET 2019), T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report: https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard: https://covid19.who.int.

[67] Historical data for stock market indices: https://finance.yahoo.com/lookup.



the stock exchange is a complex process because stock prices depend on several factors and are greatly affected by rumors [5]. The stock markets attracted researchers' attention to analyze and predict future values to guide the investors to make correct buying or selling decisions and achieve the best profit with the minimum risk.

Due to the significant need for price prediction in the stock market, many methods were presented to achieve this aim. There is also a growing interest in employing AI techniques to forecast the prices in the stock market. One of the most preferred and commonly used methods is ANN, which can be used purely or combined with other statistical methods [6]. In some cases of complex nonlinear systems like stock markets, ANN fails to learn the actual processes associated with the corresponding target variable. Also, an ANN model that is found to be efficient in some cases may be inefficient in other cases.

Many researchers have been studying stock price prediction for years; some of these studies have improved the existing models, and some have further processed the data. These studies are not perfect: some of the models are too complex, and some of the processing procedures are time-consuming. These shortcomings increase the methods' instability and limit the application and extension of the research results.

The traditional stock forecasting methods could not fit and analyze the highly nonlinear and multifactor stock market well, so there are low prediction accuracy and slow training speed problems [7]. Therefore, recent studies, such as fusion in stock market prediction [6], financial trading strategy systems [8], and short-term stock market price trend prediction [9], try to find solutions to these problems. Some of these problems are (1) inaccurate prediction and (2) taking a long time as a result of incorrectly selecting the hyperparameter values of the neural network. When using artificial neural networks in forecasting, the selection of hyperparameter values is crucial to construct the artificial neural network's best topology to achieve high-accuracy prediction of prices with the minimum possible error.

Hyperparameters are defined as the parameters that should be tuned before the beginning of the ANN training process. The hyperparameters for the ANN technique include, for example, the number of hidden layers (NHL), the number of neurons per hidden layer (NNPHL), the activation function (ACTFUN) type, and the learning rate (LR). These hyperparameters have the highest priority in designing an ANN and have a massive impact on the ANN's performance. Consequently, the multilayer ANN (MLANN) term brings to mind multilayered and multinoded complex architectures, and the learning power of an ANN is thought to come from this complexity. NHL and NNPHL are the variables that determine the network structure. They are the most mysterious hyperparameters because they increase the number of parameters to be trained. Since NHL and NNPHL are the quantities that most affect the number of parameters to be trained (weights and biases), the most optimal model, i.e., "the most complex model with the least parameters," must be established in terms of both time and learning [10, 11]. Increasing or decreasing the number of hidden layers or the number of hidden nodes

might improve the accuracy or might not. This depends on the complexity of the problem, where an inappropriate selection of these hyperparameters causes underfitting or overfitting problems [12]. For example, ANNs are universal function approximators, and for them to learn to approximate a function or to predict a task, they need to have enough 'capacity' to learn the function.

NNPHL is the main measure of the learning model capacity. For a simple function, a smaller number of hidden units might suffice; the more complex the function, the more learning capacity the model will need. Increasing the number of units by a small amount is not a problem, but a much larger number will lead to overfitting: providing a model with too much capacity makes it tend to "memorize" the dataset, affecting its capacity to generalize [13]. As seen, it is very important to select the best values of these hyperparameters.

In neural networks (NNs), activation functions are used to transfer the values from one layer to another. The motivation for the activation functions defined by multilayer neural networks is to compose simple transformations to obtain highly nonlinear ones. In particular, MLPs compose affine transformations and elementwise nonlinearities. With the appropriate choice of parameters, multilayer neural networks can, in principle, approximate any smooth function, with more hidden units allowing one to achieve better approximations. Also, activation functions are applied after every layer and need to be calculated millions of times in some cases; hence, they should be computationally inexpensive to calculate [12]. An inappropriate selection of the activation function can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during backpropagation, or it can lead to high computational expenses [14].

The learning rate is one of the most important hyperparameters to tune in an ANN to achieve better performance. The learning rate determines the step size at each training iteration while moving towards an optimum of a loss function; the amount by which the weight and bias parameters are updated is known as the learning rate. If the learning rate (LR) is much smaller than the optimal value, the model learns slowly, taking up to hundreds or thousands of epochs to reach an ideal state; an LR that is too small may also get stuck in an undesirable local minimum, and overfitting may occur. On the other hand, if the learning rate is much larger than the optimal value, the model learns much faster, but it can overshoot the ideal state, and the algorithm might not converge. This might cause undesirable divergent behavior in the loss function [15].
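This trade-off can be seen on a toy loss function: gradient descent on f(x) = x² converges for a small step size but diverges once the learning rate passes the stability limit. A minimal illustration (the function and rates here are examples, not the paper's training setup):

```python
def gradient_descent(lr, x0=5.0, steps=50):
    # Minimize f(x) = x**2, whose gradient is 2x,
    # by repeatedly stepping against the gradient
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

print(abs(gradient_descent(0.1)))   # small LR: |x| shrinks toward the optimum at 0
print(abs(gradient_descent(1.5)))   # large LR: every step overshoots and |x| grows
```

For this function each step multiplies x by (1 − 2·lr), so any lr above 1.0 makes the iterates diverge, mirroring the divergent behavior described above.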

The determination of the best hyperparameters depends on selecting suitable parameters that have to be tuned. The art of optimizing the network's hyperparameters amounts to ending up at the balance point between underfitting and overfitting. After implementing the optimal value for each hyperparameter in an iterative way, the optimal hyperparameters will build the best architecture for the ANN. This new model can improve the accuracy of future forecasted data and reduce the time consumed by the prediction process. However, this procedure remains a challenging task.
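In a PSO-based search, each candidate setting of the four hyperparameters named above can be encoded as one particle position. A minimal illustrative encoding (the value ranges are assumptions for the sketch, not the paper's exact search space):

```python
import random

# Assumed search space for the four hyperparameters: NHL, NNPHL, ACTFUN, LR
SEARCH_SPACE = {
    "NHL": range(1, 6),                # number of hidden layers
    "NNPHL": range(5, 101),            # neurons per hidden layer
    "ACTFUN": ["logsig", "tansig", "softmax", "purelin"],  # activation function
    "LR": (0.0001, 0.1),               # learning-rate bounds
}

def random_particle():
    """Sample one candidate ANN configuration (one particle position)."""
    lo, hi = SEARCH_SPACE["LR"]
    return {
        "NHL": random.choice(SEARCH_SPACE["NHL"]),
        "NNPHL": random.choice(SEARCH_SPACE["NNPHL"]),
        "ACTFUN": random.choice(SEARCH_SPACE["ACTFUN"]),
        "LR": random.uniform(lo, hi),
    }

print(random_particle())
```

The optimizer then scores each such configuration by training a network with it and measuring the prediction error, keeping the best-performing particle.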


Choosing inappropriate values for these variables leads to building an inappropriate network structure and gives inaccurate results that require many iterations to improve performance, which consumes a lot of time. These shortages lead to the need for hybridization. A hybrid model integrates two or more ANN and other models to overcome certain standalone ANN models' limitations and hence increase prediction accuracy and reduce time. The hybrid model is used for modeling complex target variables that involve different processes. Generally, the hybrid model consists of a decision-making model integrated with a learning model. The decision-making model helps the ANN to select the best hyperparameter values. For that, various techniques can be used as decision-making models, such as particle swarm optimization (PSO) [16], backpropagation [17], genetic algorithms [18], ant colony optimization [19], bee swarm optimization (BSO) [20], Tabu search [21], and fuzzy systems [22].

As stated before, ANNs have already been employed for the prediction of the closing price in stock markets, but a standalone ANN has several limitations, resulting in lower accuracy of the prediction results. This limitation is resolved using hybrid models.

In this paper, using a physical concept called the center of gravity (CoG), an enhanced PSO algorithm called PSO with the center of gravity (PSOCoG) is proposed. PSOCoG was used to select the best values of the ANN's hyperparameters; the resulting model is called ANN-PSOCoG. The proposed ANN-PSOCoG model was used to forecast the closing price in the stock market. The effect of COVID-19 on ANN-PSOCoG was also studied. In the proposed ANN-PSOCoG hybrid model, the new enhanced algorithm (PSOCoG) is used as a decision-making model, and ANN is used as a learning model.

The essential contributions of this paper can be summarized as follows:

(1) We design a generic model that can be used to select the best values of hyperparameters for an ANN.

(2) We find the best configuration for the ANN in terms of NHL, NNPHL, ACTFUN, and LR using the proposed PSOCoG for ANN.

(3) The proposed model can be utilized to estimate the closing price using different historical data of stock market indices with high accuracy and minimum error.

(4) We compare the proposed model with the traditional ANN, the standard PSO (SPSO), and ANN with standard PSO models.

(5) We study the efficiency of the proposed model under the effect of COVID-19.

The rest of the paper is organized as follows. Section 2 introduces the background of standard particle swarm optimization and the stock market. A literature review is summarized in Section 3. The description of the proposed technique is given in Section 4. The performance evaluation is presented in Section 5, followed by a conclusion and future work in Section 6.

2. Background

For the paper to be self-contained, a brief review of ANN, SPSO, and the stock market is introduced throughout the following subsections.

2.1. Artificial Neural Networks (ANNs). A classical ANN consists of three related layers: the input layer, the hidden layer, and the output layer. The number of units in the input and output layers depends on the size of the input and output data. While the input units receive original data from the outside world and provide it to the ANN, no processing is executed in any of the input units; instead, these units pass the information on to the hidden units. The hidden nodes perform data processing and transfer information from the input units to the output units. The information is then processed and transmitted by the output units from the ANN to the outside world.

Figure 1 shows a model of an artificial neuron. In this model, there are various inputs to the neural network, x0, x1, x2, ..., xn, which represent one single observation. Let Wij = (W1j, W2j, ..., Wnj) be the weight vector of the jth unit of the hidden layer and b be a bias value that permits moving the transfer function up or down. The output is created through an activation function applied to the summation of the multiplication of each input with the related weight vector. Mathematically, this can be expressed in the following equation:

x1 W1j + x2 W2j + x3 W3j + ... + xn Wnj = Σi xi Wij.    (1)

After the output is produced, an activation function φ(Σi xi Wij) is used to transmit the information to the outside world. The activation functions are used to get the output of a node and to determine the output of the ANN, such as yes or no. The activation functions map the resulting values into a range such as 0 to 1 or -1 to 1, depending upon the type of function used.

The activation functions can be basically divided into linear and nonlinear activation functions. The most commonly used activation functions for multilayer networks are log-sigmoid (logsig), softmax, tan-sigmoid (tansig), and the linear activation function (purelin) [23].
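Equation (1), together with an activation function, amounts to a single forward step per hidden unit. A minimal sketch using the log-sigmoid function mentioned above (input and weight values are illustrative only):

```python
import math

def logsig(v):
    # Log-sigmoid activation: maps any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def neuron_output(x, w, b):
    # Weighted sum of equation (1) plus the bias b,
    # passed through the activation function
    s = b + sum(xi * wi for xi, wi in zip(x, w))
    return logsig(s)

print(neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], b=0.2))
```

Swapping `logsig` for tansig or purelin changes only the final mapping, which is why the choice of activation function is itself a hyperparameter.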

ANN systems work in a different way, where the input data are combined to predict the output data. These systems are optimized in a way that allows for the computation of errors: to compute errors in these systems, the expected ANN outputs are compared to the real targets. These systems have iteration ability, where errors can be computed and estimated again, leading to a decrease in errors until the smallest possible error is found.


2.2. Standard Particle Swarm Optimization (SPSO). SPSO was presented by Kennedy and Eberhart [24]. SPSO mimics the simple behavior of organisms and their local cooperation with the environment and neighboring organisms to develop behaviors used for solving complex problems such as optimization issues. The PSO algorithm has many advantages compared to different swarm intelligence (SI) techniques. For instance, its search procedure is simple, effective, and easy to implement, and it can effectively find the best global solutions with high accuracy. PSO is a population-based search procedure in which each individual represents a particle, i.e., a possible solution. These particles are grouped into a swarm. The particles, moving within a multidimensional space, adapt their locations depending on their experience and neighbors. The principle of the PSO technique can be explained [25] as follows. Let Xi(t) = (x_i1^(t), x_i2^(t), ..., x_id^(t)) denote the position of particle i and Vi(t) = (v_i1^(t), v_i2^(t), ..., v_id^(t)) denote the velocity of particle i in the search area at time-step t. Also, let Pi = (p_i1, p_i2, ..., p_id) and Pg = (p_g1, p_g2, ..., p_gd) indicate the best position found by the particle itself and by the swarm, respectively. The new location of the particle is updated as follows:

V_id^(t+1) = w V_id^(t) + c1 r1 (P_id − X_id^(t)) + c2 r2 (P_gd − X_id^(t)), i = 1, 2, ..., N,    (2)

X_id^(t+1) = X_id^(t) + V_id^(t+1), i = 1, 2, ..., N,    (3)

where the particle moves in a multidimensional space, c1 and c2 are positive constants, r1 and r2 are random numbers in the range [0, 1], and w is the inertia weight. Vi(t) controls the improvement process and represents both the personal experience of the particle and the swapped knowledge from the other surrounding particles. The personal experience of a particle is usually indicated as the cognitive term in (2), which represents the approach toward the best local position; the swapped knowledge is indicated as the social term in (2), which represents the approach toward the best global position of the swarm.
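Equations (2) and (3) translate directly into an update loop. A minimal sketch of one SPSO iteration on a generic search space (the parameter values w = 0.7 and c1 = c2 = 1.5 are common defaults, not the paper's settings):

```python
import random

def spso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Apply equations (2) and (3) to every particle, dimension by dimension."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            # Equation (2): inertia + cognitive term + social term
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            # Equation (3): move the particle by its new velocity
            positions[i][d] += velocities[i][d]

# Two particles in a 2-D search space, starting at rest
positions = [[1.0, 2.0], [3.0, -1.0]]
velocities = [[0.0, 0.0], [0.0, 0.0]]
pbest = [[1.0, 2.0], [3.0, -1.0]]  # each particle's best position so far
gbest = [0.0, 0.0]                 # swarm-wide best position

spso_step(positions, velocities, pbest, gbest)
print(positions)
```

In a full optimizer this step alternates with re-evaluating a fitness function and updating `pbest`/`gbest`; PSOCoG modifies how the guiding position is chosen, not this basic update.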

2.3. Stock Market. The stock market is an exchange where securities and shares are traded at a price organized by request and supply [26]. According to [27], the stock market is defined by investors' reactions, which are made using the information that is associated with the "real value" of companies. The stock market is one of the most important ways of building wealth, and stocks are the cornerstone of any investment portfolio.

The most significant stock market data, called technical data, include the closing price, the high price of a trading day, the low price of a trading day, and the volume of the stock market shares traded per day [28]. A stock index is a measure of a stock market that enables investors to compare the new and old prices to know the market performance [29]. There are many common indices, such as Standard & Poor's 500 (S&P 500), National Association of Securities Dealers Automated Quotations 100 (NASDAQ-100), Dow Jones Industrial Average (DJIA), Gold, and Canadian Dollar to United States Dollar (CADUSD), used with the stock market [30]. The historical data of each of these indices, including the technical data, can be used as a dataset to train the ANN.

3. Related Work

A survey of the relevant work shows that there are many studies about forecasting the stock market's future values. The recent technological advances that support AI have led to the appearance of a new trend of investing. In this regard, buy and sell decisions can be made by treating big amounts of historical data for stock market indices and determining the risks related to them with high accuracy. Many models fall within this trend, such as (1) a dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting [31], (2) a stock market trading rule based on pattern recognition and technical analysis [32], (3) an intelligent pattern recognition model for supporting investment decisions in the stock market [33], (4) using a support vector machine with a hybrid feature selection method for stock trend prediction [34], (5) forecasting stock market trends using machine learning algorithms [35], and (6) deep learning for stock market prediction [36]. For example, the author in [37] was the first to attempt to use an ANN in order to model the economic time series of the IBM company. This method tries to understand the nonlinear regularity of changes in asset prices, like day-to-day variations. However, the scope of the work was limited, as the author used a feedforward neural network with only one hidden layer with five hidden units in it, without considering the ability to use variant numbers of hidden layers or hidden nodes. Also, the author did not study the effect of the activation function or the learning rate.

In another study [38], the authors proposed a new forecasting method to predict Bitcoin's future price. In this method, PSO was used to select the best values for the number of hidden units, the input lag space, and the output lag space of the Nonlinear Autoregressive with Exogeneous Inputs model. The results showed the ability of the model to predict Bitcoin prices accurately. It is worth noting that the authors of [38] did not discuss the impact of the NHL, the ACTFUN, and the learning rate on the performance of the algorithm. However, the proposed technique considered

Figure 1: Artificial neuron model. Inputs X0, X1, ..., Xn with synaptic weights W0, W1, ..., Wn and a bias b feed the summation b + X0W0 + X1W1 + ... + XnWn, which is passed through an activation function f to produce the output.


NHL and used different values for NHL, which allows selecting the best hyperparameters of the ANN. The proposed model also studied the impact of the activation function and the LR.

A flexible neural tree (FNT) ensemble technique is proposed in [39]. The authors in [39] developed a trusted technique to model the attitude of the seemingly chaotic stock markets. They used a tree-structure-based evolutionary algorithm and PSO algorithms to optimize the structure and parameters of the FNT. The Nasdaq-100 index and the S&P CNX NIFTY stock index were used to evaluate their technique. The authors of [39] did not consider the impact of the depth of the neural tree, that is, the NHL. They also did not study the impact of other hyperparameters, such as the ACTFUN, the LR, and the number of children at each level, in other words, the number of hidden nodes.

In [40], the author presented a method based on Multilayer Perceptron and Long Short-Term Memory networks to predict stock market indices. The historical data of the Bombay Stock Exchange (BSE) Sensex from the Indian stock market were used to evaluate the method. In [40], the selected NN architecture was verified manually, not automatically, during a trial-and-error process. The author also used only one hidden layer and changed only the number of hidden units per hidden layer, without considering the activation function and the learning rate. However, the proposed technique selects a suitable structure from all possible structures, which increases the probability of selecting the optimal structure, and uses multiple hidden layers and multiple nodes per hidden layer. The proposed technique changes the NHL and NNPHL to adjust the selection of the optimal structure and considers the effect of the ACTFUN and the LR.

In another work, the authors of [41] suggested an enhanced PSO to train Sigmoid Diagonal Recurrent Neural Networks (SDRNNs) and to optimize the parameters of the SDRNN. The historical data of the NASDAQ-100 and S&P 500 stock market indices were used to evaluate their model. The authors in [41] did not discuss the selection of the best structure of the network in terms of the NHL, NNPHL, ACTFUN, and LR.

The authors in [42] presented a new stock market prediction method that applies the firefly algorithm with an evolutionary method. They implemented their method on the Online Sequential Extreme Learning Machine (OSELM). Their model's performance was evaluated using the datasets of the BSE Sensex, NSE Sensex, S&P 500, and FTSE indices. The authors of [42] did not optimize the hyperparameters of the network.

In [43], the authors suggested a new end-to-end hybrid neural network approach to forecast the price direction of a stock market index. This approach depends on learning multiple time-scale features extracted from the daily price and a CNN that performs the prediction using fully connected layers, bringing together features learned by a Long Short-Term Memory network. The historical data of the S&P 500 index were used to evaluate this approach. The authors of [43] did not study the effect of optimizing the hyperparameters on the network's performance.

The authors of [44] proposed a hybrid model to forecast the Nikkei 225 index price for the Tokyo stock market. The hybrid model used an ANN and genetic algorithms to optimize

the accuracy of stock price prediction. The ANN was trained using backpropagation as a learning algorithm. In [44], the authors used an ANN with only one hidden layer and did not consider the effect of the hyperparameters of the ANN, such as the NHL, NNPHL, ACTFUN, and LR. In [45], the authors proposed an adaptive system to predict the price in the stock market. This system uses the PSO algorithm to overcome the problems of the backpropagation approach and to train an ANN to forecast the prices of the S&P 500 Index and the NASDAQ Composite Index, thereby helping investors to make correct trading decisions. They also used an ANN with only one hidden layer with a fixed NNPHL equal to 2N − 1, where N is the number of inputs. They did not consider the effect of other hyperparameters, such as the ACTFUN and LR.

In [46], the authors studied different modifications of PSO and their application to price prediction in the stock market. In another work [47], the PSO algorithm selects the optimally weighted set of trading signals that specify a buy, sell, or hold order, depending on evolutionary learning from the New York Stock Exchange (NYSE) and the Stock Exchange of Thailand (SET).

In [48], the authors proposed an effective prediction model using PSO to forecast the S&P 500 and DJIA stock indices for the short and long term. This model applies an adaptive linear combiner (ALC), whose weights are modified by PSO. The authors compared their model with an MLP-based model. The results show that their model is better than the MLP-based model in terms of accuracy and training time.

4. The Proposed Technique

Determining the neural network parameters to create an appropriate architecture of the neural network that produces output with acceptable error is very time- and cost-consuming. To determine the architecture of the network, the configuration is generally carried out by hand in a trial-and-error fashion. However, the manual tuning of the hyperparameters of an ANN through the trial-and-error process and finding accurate configurations consume a long time. A different approach uses global optimization techniques, for instance, applying the PSO algorithm to choose the best architectures with minimum error. As can be seen from the discussion of the related literature, most of the proposed methods suffer from limitations such as using a certain fixed NHL, a fixed NNPHL, or a fixed number of iterations. In most of these methods, the proposed approaches study the effect of only one of these hyperparameters at a time. Some of the suggested algorithms are time-consuming and have a high computational cost. Particle swarm optimization has been very usefully employed in finding optimal parameters [49]. For that reason, in our earlier work [50], the effect of the NHL and NNPHL on the performance of an ANN using PSO was studied and discussed.

The purpose of the proposed model is to provide a potential solution for the problem of designing the best structure for a multilayer ANN (MLANN) by selecting the best configuration for the network, that is, choosing the best hyperparameters, including the NHL, NNPHL,


ACTFUN, and the learning rate, maximizing prediction accuracy and minimizing processing time.

Now, we modify the standard PSO by adding the "center of gravity (CoG)" concept, which can be characterized as a mean of gravities weighted by their distances from a referenced center. A new algorithm called PSO with center of gravity (PSOCoG) is proposed in this paper, as described in the following paragraphs. Assume that there are $N_g$ single gravitational points $1, \ldots, N_g$. The movement of these gravitational individuals is identified by their location vectors $\vec{s}_i(t)$, where $i = 1, \ldots, N_g$. The CoG is the position in a system of gravities whose location vector $\vec{S}$ is determined from the gravities $g_i$ and the location vectors $\vec{s}_i$ in this way:

$$\vec{S} = \frac{1}{G} \sum_{i=1}^{N_g} g_i \times \vec{s}_i, \qquad (4)$$

$$G = \sum_{i=1}^{N_g} g_i, \qquad (5)$$

where $G$ is the sum of gravities and $N_g$ is the number of gravities [51].

In this technique, an efficient CoG-particle ($X_{cg}$) is suggested. $X_{cg}$ participates in speeding up the convergence of the model in fewer repetitions; it helps in finding a solution closer to the optimum and in enhancing the quality of the solution. The CoG-particle is a virtual member of the swarm used to represent the swarm at each repetition. In addition, the CoG-particle is weighted by the values of a fitness function for the swarm members. It has no speed and does not perform any of the standard swarm member's tasks, such as fitness evaluation or searching for the optimal solution. The weighted center of the swarm can be determined as follows, by considering the swarm individuals as a group of gravity points and making the value of the target function of each individual play the role of its gravity:

$$X_{cg}(t) = \frac{1}{F_{cg}} \sum_{i=1}^{N_g} F(x_i) \times x_i(t), \qquad (6)$$

where $F(x_i)$ is the fitness value of member $i$ at location $x_i(t)$, and $F_{cg}$ is the sum of the fitness values of all members. Using formula (7), $F_{cg}$ can be determined similarly to the sum of gravities $G$ in formula (5). The fitness value $F(x_i)$ for maximum optimization is given in formula (8), while formula (9) is used for minimum optimization:

$$F_{cg} = \sum_{i=1}^{N_g} F(x_i), \qquad (7)$$

$$F(x_i) = f(x_i), \qquad (8)$$

$$F(x_i) = \frac{1}{f(x_i) + \varepsilon}, \quad \varepsilon \neq 0, \qquad (9)$$

where $f(x_i)$ is the objective function, whose range is supposed to be positive in this situation.
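As a concrete illustration, the CoG-particle of formulas (6)–(9) can be computed in a few lines of NumPy. This is a minimal sketch under our own naming (the function `cog_particle` and the example values are hypothetical), not the authors' implementation:

```python
import numpy as np

def cog_particle(positions, objective_values, minimize=True, eps=1e-9):
    """Compute the CoG-particle X_cg of Eq. (6).

    For minimization, the gravity of particle i is F(x_i) = 1 / (f(x_i) + eps),
    as in Eq. (9); for maximization, F(x_i) = f(x_i), as in Eq. (8).
    F_cg is the sum of the gravities, Eq. (7)."""
    x = np.asarray(positions, dtype=float)         # shape (N_g, D)
    f = np.asarray(objective_values, dtype=float)  # shape (N_g,)
    F = 1.0 / (f + eps) if minimize else f
    F_cg = F.sum()                                 # Eq. (7)
    return (F[:, None] * x).sum(axis=0) / F_cg     # Eq. (6)

# Example: three particles in 2-D; under minimization, the particle with
# the lowest objective value dominates the weighted center.
x_cg = cog_particle([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]], [0.1, 1.0, 10.0])
```

Because the gravities are inversely proportional to the objective values under minimization, the CoG is pulled towards the best-performing particles, which is what makes it a useful attractor in the velocity update below.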

The updated speed formula for each member can be created via equation (10) by computing $X_{cg}$, the best global solution at the whole-swarm level $P_{gs}$, and the best local solution detected by every individual $P_{il}$, as follows:

$$v_{id}^{(t+1)} = w\,v_{id}^{(t)} + c_1 r_1 \left( \frac{P_{ild}^{(t)} + X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right) + c_2 r_2 \left( \frac{P_{gsd}^{(t)} - X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right), \quad i = 1, \ldots, N_g. \qquad (10)$$

Then, the modified location of any individual in the swarm is calculated using

$$X_{id}^{(t+1)} = v_{id}^{(t+1)} + X_{id}^{(t)}, \quad i = 1, \ldots, N_g, \qquad (11)$$

where the particle moves in a multidimensional space. The searching cycle proceeds until the termination condition is true. The idea behind the CoG-particle ($X_{cg}$) is to drive the convergence of swarm individuals towards the optimal global solution and help them move away from local solutions. This can be clarified by explaining the function of the second and third parts of (10). The term $\left((P_{ild}^{(t)} + X_{cg}^{(t)})/2 - X_{id}^{(t)}\right)$ attracts the present location of the individual towards the average of the positive direction of its best local solution ($+P_{ild}$) and the positive direction of the location of the CoG-particle ($+X_{cg}$), which helps the personal-experience term move away from local solutions, while the term $\left((P_{gsd}^{(t)} - X_{cg}^{(t)})/2 - X_{id}^{(t)}\right)$ attracts the present location of the swarm member towards the average of the positive direction of the best global solution ($+P_{gs}$) and the negative direction of the location of the CoG-particle ($-X_{cg}$), which helps maintain population diversity (the exploration and exploitation process) during the search. This increases the probability of rapidly approaching global (or near-global) solutions, since the CoG-particle pulls swarm members towards the area of the best-discovered promising solutions. Consequently, individuals have the best opportunity to reach the location of the global best-discovered candidate solution during the discovery phase. All earlier motions are supported by a linearly decreasing inertia weight, which allows the population diversity to be adjusted during the search.
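The velocity and position updates of equations (10) and (11) can be sketched for the whole swarm at once. This is an illustrative NumPy version under our own naming (`pso_cog_step` is hypothetical), not the authors' code; `w`, `c1`, `c2`, `r1`, and `r2` carry the standard PSO meanings used in (10):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pso_cog_step(X, V, P_il, P_gs, X_cg, w, c1=2.0, c2=2.0):
    """One PSOCoG iteration for the whole swarm.

    X, V, P_il have shape (pop, D); P_gs and X_cg have shape (D,).
    Eq. (10): the cognitive term attracts each particle towards the average
    of its personal best and +X_cg; the social term attracts it towards the
    average of the global best and -X_cg.
    Eq. (11): the position moves by the new velocity."""
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    V_new = (w * V
             + c1 * r1 * ((P_il + X_cg) / 2.0 - X)   # cognitive term, Eq. (10)
             + c2 * r2 * ((P_gs - X_cg) / 2.0 - X))  # social term, Eq. (10)
    X_new = X + V_new                                # Eq. (11)
    return X_new, V_new

# One step for a swarm of 20 particles in a 4-D search space.
X = rng.random((20, 4))
V = np.zeros((20, 4))
X_new, V_new = pso_cog_step(X, V, P_il=X.copy(), P_gs=X[0].copy(),
                            X_cg=X.mean(axis=0), w=0.9)
```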

The proposed prediction model uses the new PSOCoG technique to train the ANN, and the resulting model is called the ANN-PSOCoG model. In the ANN-PSOCoG model, a 4-dimensional search space is used: the first dimension is the number of hidden layers (NHL), the second is the number of nodes per hidden layer (NNPHL), the third is the type of activation function (ACTFUN), and the fourth is the learning rate (LR). Any particle inside the search area is considered a candidate solution; this means that the location of a particle determines the values of NHL, NNPHL, ACTFUN, and LR, which represent a possible configuration for the network. In the search phase, the PSO technique is used to find the best settings by flying the particles within a bounded search space. Each particle


has its own attributes, which are its location, its speed, and a fitness value calculated by a fitness function. The particle's speed defines its next movement (direction and traveled distance). The fitness value represents an index of the convergence of the particle to the solution. The position of each particle is updated to approach the individual that has the optimal location according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution that the particle can reach personally; the second is the global optimal solution, which is the solution that the swarm can obtain cooperatively so far. The suggested technique uses several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

$$f_i = \frac{1}{m} \sum_{i=1}^{m} (\text{expected output}_i - \text{actual output}_i)^2, \qquad (12)$$

where $m$ is the number of inputs of the ANN. The fitness values of all swarm members are determined

using (12). For each particle, the location is stored as $X_i$ and the fitness value is stored as $P_{il}$. Among all $P_{il}$, the one with the minimum fitness value is selected as the global best particle ($P_{gs}$); then the location and fitness value of $P_{gs}$ are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process is iterated until the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters, which are used to build the ANN. Algorithm 1 explains the suggested model.
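To make the 4-D search space concrete, the sketch below decodes a particle position into a candidate network configuration and evaluates a set of predictions with the fitness of equation (12). The decoding ranges follow the settings listed later in Section 5.2; the function names and the rounding scheme are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

ACT_FUNS = ["logsig", "softmax", "tansig", "purelin"]

def decode_particle(position):
    """Map a 4-D particle position to ANN hyperparameters:
    NHL in [1, 7], NNPHL in [14, 21], an ACTFUN from ACT_FUNS,
    and LR in [0.01, 0.9] (the ranges of Section 5.2)."""
    nhl = int(np.clip(round(position[0]), 1, 7))
    nnphl = int(np.clip(round(position[1]), 14, 21))
    act = ACT_FUNS[int(np.clip(round(position[2]), 0, len(ACT_FUNS) - 1))]
    lr = float(np.clip(position[3], 0.01, 0.9))
    return nhl, nnphl, act, lr

def fitness(expected, actual):
    """Eq. (12): mean squared difference between expected and actual outputs."""
    e = np.asarray(expected, dtype=float)
    a = np.asarray(actual, dtype=float)
    return float(np.mean((e - a) ** 2))

# A hypothetical particle position and its decoded configuration.
config = decode_particle([5.7, 20.6, 3.2, 0.5469])
```

Each decoded configuration would then be used to build and train one candidate ANN, whose prediction error on the training data gives the particle's fitness.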

5. Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the

experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of Standard and Poor's 500 (S&P 500) [52] and the Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The 500 stocks that highlight the performance of large-cap entities, selected by leading economists and weighted by market values, are referred to as the Standard and Poor's 500 Index. The S&P 500 Index, or the Standard & Poor's 500 Index, is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3776 trading days, from August 16, 2005 to August 16, 2020. We also used the historical data of the DJIA dataset, where the overall number of observations for the exchange indices was 9036 trading days, from Jan 28, 1985 to Dec 02, 2020. Each observation includes the opening price, highest price, lowest price, the

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (f_i) for each particle
(7)     if (the fitness value is less than P_il)
(8)       Set the current fitness value as the new (P_il)
(9)     end-if
(10)    Select the minimum (f_i) of all particles in the swarm as P_gs
(11)    for d = 1 to D
(12)      Calculate V_id^(t+1) from Equation (10)
(13)      Calculate X_id^(t+1) from Equation (11)
(14)    end-for
(15)  end-for
(16)  Return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.


closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and the total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, the data are divided such that 80% of the available data were used as a training set, 10% as a validation set, and the last 10% as a test set.
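The 80/10/10 split described above can be expressed as follows. Since the observations are time-ordered, this sketch does not shuffle them; the function name is our own, and the exact split boundaries are an assumption:

```python
def chronological_split(observations, train_frac=0.8, val_frac=0.1):
    """Split time-ordered observations into training, validation,
    and test sets (80% / 10% / 10%) without shuffling."""
    n = len(observations)
    i = int(n * train_frac)
    j = int(n * (train_frac + val_frac))
    return observations[:i], observations[i:j], observations[j:]

# The S&P 500 dataset used in this paper has 3776 trading days.
train, val, test = chronological_split(list(range(3776)))
```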

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) = 20 and the maximum number of iterations (maxite) = 100. Besides, the following settings were used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (NHL, NNPHL, ACTFUN, and LR) was used to represent the hyperparameters of the ANN.
(vi) The initialization range for NHL was set to [1, 7] according to [44, 58].
(vii) NNPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg–Marquardt algorithm [62] was used as the training function for the network.
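Setting (iv), the linearly decreasing inertia weight, can be sketched as below. The function name is illustrative, and the schedule assumes the weight reaches Wmin exactly at the final iteration:

```python
def inertia_weight(t, max_ite=100, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max at iteration 0
    to w_min at iteration max_ite - 1 (settings (ii)-(iv) above)."""
    return w_max - (w_max - w_min) * t / (max_ite - 1)

schedule = [inertia_weight(t) for t in range(100)]
```

A large early weight favors exploration of the search space; the shrinking weight gradually shifts the swarm towards exploitation of the best-found regions.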

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \qquad (13)$$

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \qquad (14)$$

$$\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \qquad (15)$$

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (\text{actual output}_i - \text{expected output}_i)^2. \qquad (16)$$
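Equations (13)–(16) translate directly into code. The sketch below is our own NumPy rendering of these standard metrics (the function names are illustrative); note that MAPE and RE divide by the actual output, so they assume nonzero actual prices:

```python
import numpy as np

def mape(actual, expected):
    """Eq. (13): mean absolute percentage error, in percent."""
    a, e = np.asarray(actual, dtype=float), np.asarray(expected, dtype=float)
    return float(np.mean(np.abs((a - e) / a)) * 100.0)

def mae(actual, expected):
    """Eq. (14): mean absolute error."""
    a, e = np.asarray(actual, dtype=float), np.asarray(expected, dtype=float)
    return float(np.mean(np.abs(a - e)))

def relative_error(actual, expected):
    """Eq. (15): per-observation relative error RE_i."""
    a, e = np.asarray(actual, dtype=float), np.asarray(expected, dtype=float)
    return (a - e) / a

def mse(actual, expected):
    """Eq. (16): mean-square error."""
    a, e = np.asarray(actual, dtype=float), np.asarray(expected, dtype=float)
    return float(np.mean((a - e) ** 2))
```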

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG is also compared with the other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only an artificial neural network was used to predict the closing price. The historical data of Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg–Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while the MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using the historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. The MAPE for this method is 0.0053, while the MAE is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization with the CoG concept was used to predict the closing price using the historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. The MAPE for this method is 0.0038, while the MAE is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and standard particle swarm optimization (ANN-SPSO) was


used to predict the closing price, where SPSO was used to train the network and select the best configuration. The dataset of historical data of the S&P 500 was used in this scenario. The MAPE for ANN-SPSO turned out to be 0.0027, while the MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts the ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while the RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of the NHL, NNPHL, ACTFUN, and LR. The dataset of historical data of the S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7,

Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.

with an MAPE equal to 0.00024 and an MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: the NHL was equal to 6, the NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.

Although the process of determining the best hyperparameters of an ANN usually consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of the regression coefficient (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of R for training was 0.99995, for validation 0.99992, and for testing 0.99994, while for all it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the output forecasted by the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with a very high accuracy, close to the actual closing price. Based on the above discussion, it is noted that the proposed model is able to give investors the confidence to make the correct decisions to buy or sell shares, achieving the largest possible profit and avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, since MAPE captures the fluctuation of the ECP around the ACP: if the fluctuation of the price increases, the error of the prediction increases, and the risk of predicting the closing price for the stock market increases.

Besides, the MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the

ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using the historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between the expected closing prices of all models and the actual closing price using the historical data of the S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, SPSOCoG by about 17%, SPSO by about 20%, and ANN by approximately 25%.

Figure 9(a) shows the comparison between the expected closing prices of all models and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, SPSOCoG by about 24%, SPSO by about 33%, and ANN by approximately 42%, while Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, as well as the MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the performance of the proposed ANN-PSOCoG is the best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the accuracy of the proposed model is the best among the compared models, and ANN-PSOCoG is more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, the MSE is used as a metric during the training phase. Figure 10(a) shows the MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows the MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the lowest value of MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest among the compared models.

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.

5.3.7. Study of the Efficiency of ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time; for example, the S&P 500 index in the USA lost one-third of its value during only one month of the COVID-19 crisis [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dot-com crisis (which peaked

on March 24, 2000), the subprime crisis (which peaked on Oct 9, 2007), and the COVID-19 crisis (which peaked on Feb 19, 2020) [65]. As can be seen from the figure, in March 2020, it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year during the subprime crisis and one year and a half during the dot-com bust.

Until September 2, 2020, there were about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training: R = 0.99995. (b) Validation: R = 0.99992. (c) Test: R = 0.99994. (d) All: R = 0.99994.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.

countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019 to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices, namely, S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG over the historical data of the S&P 500, while Figure 12(b)

shows the relative error between the ACP and the ECP for ANN-PSOCoG over the historical data of the S&P 500.

Figure 13(a) shows the ACP and ECP for ANN-PSOCoG over the historical data of Gold, while Figure 13(b) shows the RE for ANN-PSOCoG over the historical data of Gold.

The ACP and ECP for the tested ANN-PSOCoG using the historical data of NASDAQ-100 are shown in Figure 14(a), while the RE of ANN-PSOCoG using the NASDAQ-100 data is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and the other models.

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO        ANN
MAPE                0.00024      0.0027      0.0038     0.0053      0.0070
MAE                 0.00017      0.047       0.067      0.070       0.087
Elapsed time (sec)  510.82822    891.28091   952.1345   1123.75813  1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG   ANN-SPSO   SPSOCoG    SPSO       ANN
MAPE                0.00072      0.030      0.042      0.062      0.071
MAE                 0.00055      0.0024     0.0033     0.0049     0.0056
Elapsed time (sec)  834.259      973.215    1056.284   1389.368   1537.569


Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index, scaled to 100 at each peak, versus days from the peak, for the dot-com, subprime, and COVID-19 crises).

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of the S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

The last test was training the proposed model using the historical data of CANUSD, which represents the ratio between the Canadian dollar and the American dollar. The actual rate and the expected rate are shown in Figure 15(a), while the relative error of the prediction is shown in Figure 15(b).

The MAE and MAPE for the proposed ANN-PSOCoG using the historical data of the S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12–15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold, 1-Dec-19 to 27-Aug-20. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100, 1-Dec-19 to 27-Aug-20. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD, 1-Dec-19 to 27-Aug-20. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.
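For reference, the error metrics used throughout this evaluation can be computed as in the following sketch. It assumes the standard definitions of MAPE, MAE, and per-sample relative error (the paper does not write out its formulas), and the sample prices are invented for illustration:

```python
def mae(actual, predicted):
    # Mean absolute error: average magnitude of the prediction errors.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # Mean absolute percentage error, expressed as a fraction of the actual price.
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def relative_errors(actual, predicted):
    # Per-day relative error, the quantity plotted in Figures 12(b)-15(b).
    return [(p - a) / a for a, p in zip(actual, predicted)]

acp = [2800.0, 2850.0, 2900.0]  # actual closing prices (illustrative)
ecp = [2795.0, 2856.0, 2898.0]  # estimated closing prices (illustrative)
print(mae(acp, ecp), mape(acp, ecp), relative_errors(acp, ecp))
```

A MAPE of 0.00158 for the S&P 500, as in Table 3, thus corresponds to an average deviation of about 0.16% from the actual closing price.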

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity" (CoG) concept, called PSOCoG, is proposed. This modification gives a new, efficient search technique. It benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN to select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high accuracy of prediction and with a very small error. The proposed model displayed better performance compared with the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Also, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model using the S&P 500 dataset. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows a high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different datasets of stock markets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model. The values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

The proposed model might need more training time, and it requires modern computational resources when it is used with a very large dataset, an oil reservoir dataset for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be discussed to improve the proposed model as follows:

(i) We will study the effect of changing the NNPHL for each layer, instead of it being the same in all hidden layers of the proposed model.

(ii) We will extend the proposed ANN-PSOCoG to discuss the ability to select more hyperparameters, such as the number of iterations and the size of a batch.

(iii) We will discuss the effect of the activation function, whether we apply the same activation function for all the layers or apply a different activation function for each layer.

(iv) The impact of using more than one swarm for PSOCoG can be studied.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We will discuss more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics   S&P 500    Gold       NASDAQ-100   CANUSD
MAPE      0.00158    0.007799   0.00239      0.001719891
MAE       0.030529   0.079009   0.224702     1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervelló-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-L. Chen and F.-Y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervelló-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.

[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) Bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard & Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http140137101738008rimanualriumhtm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, CA, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.



Choosing inappropriate values for these variables leads to building an inappropriate network structure and gives inaccurate results that require many iterations to improve performance, which consumes a lot of time. These shortages lead to the need for hybridization. A hybrid model integrates an ANN with one or more other models to overcome certain standalone ANN models' limitations and hence increase prediction accuracy and reduce time. The hybrid model is used for modeling complex target variables that involve different processes. Generally, the hybrid model consists of a decision-making model integrated with a learning model. The decision-making model helps the ANN to select the best hyperparameter values. For that, various techniques can be used as decision-making models, such as particle swarm optimization (PSO) [16], backpropagation [17], genetic algorithms [18], ant colony optimization [19], bee swarm optimization (BSO) [20], Tabu search [21], and fuzzy systems [22].

As stated before, ANNs have already been employed for the prediction of the closing price in stock markets, but a standalone ANN has several limitations, resulting in lower accuracy of the prediction results. These limitations are resolved using hybrid models.

In this paper, using a physical concept called the center of gravity (CoG), an enhanced PSO algorithm called PSO with the center of gravity (PSOCoG) is proposed. PSOCoG was used to select the best values of the ANN's hyperparameters; the resulting model is called ANN-PSOCoG. The proposed ANN-PSOCoG model was used to forecast the closing price in the stock market. The effect of COVID-19 on ANN-PSOCoG was also studied. In the proposed ANN-PSOCoG hybrid model, the new enhanced algorithm (PSOCoG) is used as a decision-making model and the ANN is used as a learning model.

The essential contributions of this paper can be summarized as follows:

(1) We design a generic model that can be used to select the best values of hyperparameters for an ANN.

(2) We find the best configuration for the ANN in terms of NHL, NNPHL, ACTFUN, and LR using the proposed PSOCoG for the ANN.

(3) The proposed model can be utilized to estimate the closing price using different historical data of stock market indices with high accuracy and minimum error.

(4) We compare the proposed model with the traditional ANN, the standard PSO (SPSO), and ANN with standard PSO models.

(5) We study the efficiency of the proposed model under the effect of COVID-19.
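To make the hyperparameter search concrete, each particle in the decision-making model can encode one candidate ANN configuration. The sketch below shows one plausible real-valued encoding of the four hyperparameters (NHL, NNPHL, ACTFUN, and LR) and its decoding into a concrete configuration; the bounds, activation names, and rounding scheme are illustrative assumptions, not the exact scheme used by the authors:

```python
import random

# Search ranges (illustrative assumptions, not the paper's exact bounds).
ACTIVATIONS = ["logsig", "tansig", "purelin", "softmax"]
NHL_MAX = 5        # max number of hidden layers
NNPHL_MAX = 50     # max number of neurons per hidden layer
LR_RANGE = (0.001, 0.1)

def random_particle():
    # A particle is a real-valued vector: [NHL, NNPHL, ACTFUN index, LR].
    return [random.uniform(1, NHL_MAX),
            random.uniform(1, NNPHL_MAX),
            random.uniform(0, len(ACTIVATIONS) - 1e-9),
            random.uniform(*LR_RANGE)]

def decode(particle):
    # Round the continuous position to a concrete ANN configuration.
    nhl, nnphl, act, lr = particle
    return {"NHL": max(1, round(nhl)),
            "NNPHL": max(1, round(nnphl)),
            "ACTFUN": ACTIVATIONS[int(act)],
            "LR": min(max(lr, LR_RANGE[0]), LR_RANGE[1])}

print(decode(random_particle()))
```

Each decoded configuration would then be built, trained, and scored (e.g., by prediction error) to give the particle's fitness, and the swarm update rules move the particles through this hyperparameter space.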

The rest of the paper is organized as follows. Section 2 introduces the background of standard particle swarm optimization and the stock market. A literature review is summarized in Section 3. The description of the proposed technique is given in Section 4. The performance evaluation is presented in Section 5, followed by a conclusion and future work in Section 6.

2. Background

For the paper to be self-contained, a brief review of ANN, SPSO, and the stock market is introduced throughout the following subsections.

2.1. Artificial Neural Networks (ANNs). A classical ANN consists of three related layers: the input layer, the hidden layer, and the output layer. The number of units in the input and output layers depends on the size of the input and output data. The input units receive original data from the outside world and provide it to the ANN; no processing is executed in any of the input units. Instead, these units pass the information on to the hidden units. The hidden nodes perform data processing and transfer information from the input units to the output units. The information is then processed and transmitted by the output units from the ANN to the outside world.

Figure 1 shows a model for an artificial neuron. In this model, there are various inputs to the neural network, x_0, x_1, x_2, ..., x_n, which represent one single observation. Let W_ij = (W_1j, W_2j, ..., W_nj) be a weight vector of the jth unit of the hidden layer, and let b be a bias value that permits moving the transfer function up or down. The output is created through an activation function from the summation of the multiplication of each input with the related weight. Mathematically, this can be expressed in the following equation:

x_1 W_1j + x_2 W_2j + x_3 W_3j + ... + x_n W_nj = Σ_i x_i W_ij.  (1)

After the output is produced, an activation function φ(Σ_i x_i W_ij) is used to transmit the information to the outside world. The activation function determines the output of the node, and hence of the ANN, such as yes or no. Activation functions map the resulting values into a range of values such as 0 to 1 or −1 to 1, depending upon the type of function used.

Activation functions can basically be divided into linear and nonlinear activation functions. The most commonly used activation functions for multilayer networks are log-sigmoid (logsig), softmax, tan-sigmoid (tansig), and the linear activation function (purelin) [23].

ANN systems work in a different way, where the input data are combined to predict the output data. These systems are optimized in a way that allows for the computation of errors: the expected ANN outputs are compared to the real targets. These systems have an iteration ability, where errors can be computed and estimated again, leading to a decrease in errors until the smallest possible error is found.
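The neuron computation just described (weighted sum, bias, then a squashing activation) can be sketched directly. The example below uses the log-sigmoid (logsig) activation mentioned above, with invented input and weight values:

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs, as in equation (1), plus the bias b.
    s = bias + sum(x * w for x, w in zip(inputs, weights))
    # Log-sigmoid (logsig) activation: maps the sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-s))

# One single observation with three inputs (illustrative values).
x = [0.5, -1.2, 3.0]
w = [0.4, 0.1, -0.2]
b = 0.1
print(neuron_output(x, w, b))
```

In a full network, the outputs of one layer's neurons become the inputs of the next layer, and training iteratively adjusts the weights and biases to shrink the error described above.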


2.2. Standard Particle Swarm Optimization (SPSO). SPSO was presented by Kennedy and Eberhart [24]. SPSO mimics the simple behavior of organisms and their local cooperation with the environment and neighboring organisms to develop behaviors used for solving complex problems such as optimization issues. The PSO algorithm has many advantages compared to other swarm intelligence (SI) techniques. For instance, its search procedure is simple, effective, and easy to implement, and it can effectively find the best global solutions with high accuracy. PSO is a population-based search procedure in which each individual represents a particle, i.e., a possible solution. These particles are grouped into a swarm. The particles, moving within a multidimensional space, adapt their locations depending on their experience and their neighbors. The principle of the PSO technique can be explained [25] as follows. Let X_i(t) = (x_i1^(t), x_i2^(t), ..., x_id^(t)) denote the position of particle i and V_i(t) = (v_i1^(t), v_i2^(t), ..., v_id^(t)) denote the velocity of particle i in the searching area at a time-step t. Also, let P_i = (p_i1, p_i2, ..., p_id) and P_g = (p_g1, p_g2, ..., p_gd) indicate the preferable choice established by the particle itself and by the swarm, respectively. The new location of the particle is updated as follows:

V_id^(t+1) = w V_id^(t) + c_1 r_1 (P_id − X_id^(t)) + c_2 r_2 (P_gd − X_id^(t)),  i = 1, 2, ..., N,  (2)

X_id^(t+1) = X_id^(t) + V_id^(t+1),  i = 1, 2, ..., N,  (3)

where the particle moves in a multidimensional space, c_1 and c_2 are positive constants, r_1 and r_2 are random numbers in the range [0, 1], and w is the inertia weight. V_i(t) controls the improvement process and represents both the personal experience of the particle and the swapped knowledge from the other surrounding particles. The personal experience of a particle is usually indicated as a cognitive term, and it represents the approach toward the best local position. The swapped knowledge is indicated as a social term in (2), which represents the approach toward the best global position of the swarm.
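Equations (2) and (3) map directly onto code. The following is a minimal, generic SPSO sketch minimizing a toy sphere function; the swarm size, inertia weight, and acceleration constants are illustrative defaults, not the settings used later in the paper:

```python
import random

def spso_minimize(f, dim, n_particles=20, iters=100,
                  w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    # Initialize positions randomly in the search area, velocities at zero.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]                  # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best position/value

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equation (2): inertia + cognitive term + social term.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                # Equation (3): move the particle.
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:                  # update personal best
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:                 # update global best
                    gbest, G = fx, X[i][:]
    return G, gbest

best_x, best_f = spso_minimize(lambda x: sum(v * v for v in x), dim=2)
print(best_x, best_f)
```

The PSOCoG modification proposed later changes how the guiding position is formed, but the update loop keeps this same shape.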

2.3. Stock Market. The stock market is an exchange where securities and shares are traded at a price organized by the request and supply [26]. According to [27], the stock market

is defined by investors' reaction, which is driven by the information that is associated with the "real value" of companies. The stock market is one of the most important ways of building wealth, and stocks are the cornerstone of any investment portfolio.

The most significant stock market data, called technical data, include the closing price, the high price of a trading day, the low price of a trading day, and the volume of the stock market shares traded per day [28]. A stock index is a measure of a stock market that enables investors to compare the new and old prices to know the market performance [29]. There are many common indices used with the stock market, such as Standard & Poor's 500 (S&P 500), National Association of Securities Dealers Automated Quotations 100 (NASDAQ-100), Dow Jones Industrial Average (DJIA), Gold, and Canadian Dollar to United States Dollar (CADUSD) [30]. The historical data of each of these indices, including the technical data, can be used as a dataset to train the ANN.

3. Related Work

A survey of the relevant work shows that there are many studies about forecasting the stock market's future values. The recent technological advances that support AI have led to the appearance of a new trend of investing. In this regard, buy and sell decisions can be made by treating big amounts of historical data for stock market indices and determining the risks related to them with high accuracy. Many models fall within this trend, such as (1) a dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting [31], (2) a stock market trading rule based on pattern recognition and technical analysis [32], (3) an intelligent pattern recognition model for supporting investment decisions in the stock market [33], (4) using a support vector machine with a hybrid feature selection method for stock trend prediction [34], (5) forecasting stock market trends using machine learning algorithms [35], and (6) deep learning for stock market prediction [36]. For example, the author in [37] was the first to attempt to use an ANN in order to model the economic time series of the IBM company. This method tries to understand the nonlinear regularity of changes in asset prices, like day-to-day variations. However, the scope of the work was limited, as the author used a feedforward neural network with only one hidden layer with five hidden units in it, without considering the ability to use variant numbers of hidden layers or hidden nodes. Also, the author did not study the effect of the activation function or the learning rate.

In another study [38], the authors proposed a new forecasting method to predict Bitcoin's future price. In this method, PSO was used to select the best values for the number of hidden units, the input lag space, and the output lag space of the Nonlinear Autoregressive with Exogeneous Inputs model. The results showed the ability of the model to predict Bitcoin prices accurately. It is worth noting that the authors of [38] did not discuss the impact of the NHL, the ACTFUN, and the learning rate on the performance of the algorithm. However, the proposed technique considered

Figure 1: Artificial neuron model (inputs x_0, x_1, ..., x_n with synaptic weights W_0, W_1, ..., W_n and bias b, the summation b + x_0 W_0 + x_1 W_1 + ... + x_n W_n, and an activation function f producing the output).


the NHL and used different values for the NHL, which allows selecting the best hyperparameters of the ANN. The proposed model also studied the impact of the activation function and the LR.

A flexible neural tree (FNT) ensemble technique is proposed in [39]. The authors in [39] developed a trusted technique to model the attitude of the seemingly chaotic stock markets. They used a tree-structure-based evolutionary algorithm and PSO algorithms to optimize the structure and parameters of the FNT. The Nasdaq-100 index and the S&P CNX NIFTY stock index were used to evaluate their technique. The authors of [39] did not consider the impact of the depth of the neural tree, that is, the NHL. They also did not study the impact of other hyperparameters such as the ACTFUN, the LR, and the number of children at each level, in other words, the number of hidden nodes.

In [40], the author presented a method based on Multilayer Perceptron and Long Short-Term Memory networks to predict stock market indices. The historical data of the Bombay Stock Exchange (BSE) Sensex from the Indian stock market were used to evaluate the method. In [40], the selected NN architecture was verified manually, not automatically, through a trial-and-error process. The author also used only one hidden layer and changed only the number of hidden units per hidden layer, without considering the activation function and the learning rate. However, the proposed technique selected a suitable structure from all possible structures, which increases the probability of selecting the optimal structure, and used multiple hidden layers and multiple nodes per hidden layer. The proposed technique changed the NHL and the NNPHL to adjust the selection of the optimal structure and considered the effect of ACTFUN and the LR.

In another work, the authors of [41] suggested an enhanced PSO to train Sigmoid Diagonal Recurrent Neural Networks (SDRNNs) and to optimize the parameters of SDRNN. The historical data of the NASDAQ-100 and S&P 500 stock market indices were used to evaluate their model. The authors in [41] did not discuss the selection of the best structure of the network in terms of NHL, NNPHL, ACTFUN, and LR.

The authors in [42] presented a new stock market prediction method and applied the firefly algorithm with an evolutionary method. They implemented their method on the Online Sequential Extreme Learning Machine (OSELM). Their model's performance was evaluated using the datasets of the BSE Sensex, NSE Sensex, S&P 500, and FTSE indices. The authors of [42] did not optimize the hyperparameters of the network.

In [43], the authors suggested a new end-to-end hybrid neural network approach to forecast the stock market index's price direction. This approach depends on learning multiple time-scale features extracted from the daily price and a CNN network to perform the prediction using fully connected layers that bring together features learned by the Long Short-Term Memory network. The historical data of the S&P 500 index was used to evaluate this approach. The authors of [43] did not study the effect of optimization of the hyperparameters on the network's performance.

The authors of [44] proposed a hybrid model to forecast the Nikkei 225 index price for the Tokyo stock market. The hybrid model used ANN and genetic algorithms to optimize the accuracy of stock price prediction. ANN was trained using backpropagation as a learning algorithm. In [44], the authors used ANN with only one hidden layer and did not consider the effect of hyperparameters of ANN such as NHL, NNPHL, ACTFUN, and LR. In [45], the authors proposed an adaptive system to predict the price in the stock market. This system uses the PSO algorithm to overcome the problems of the backpropagation approach and to train ANN to forecast the price of the S&P 500 Index and the NASDAQ Composite Index, therefore helping investors to make correct trading decisions. They also used ANN with only one hidden layer with a fixed NNPHL equal to 2N - 1, where N is the number of inputs. They did not consider the effect of other hyperparameters such as ACTFUN and LR.

In [46], the authors studied the different modifications of PSO and its application to the stock market to predict prices. In another work [47], the PSO algorithm selects the optimally weighted set of trading signals that specify a buy order, sell order, or hold order, depending on evolutionary learning from the New York Stock Exchange (NYSE) and the Stock Exchange of Thailand (SET).

In [48], the authors proposed an effective prediction model using PSO to forecast the S&P 500 and DJIA stock indices for the short and long term. This model applies an adaptive linear combiner (ALC), whereas PSO modifies its weights. The authors compared their model with an MLP-based model. The results show that their model performs better than the MLP-based model in terms of accuracy and training time.

4. The Proposed Technique

Determining the neural network parameters to create an appropriate architecture that produces output with an accepted error is very time- and cost-consuming. To determine the architecture of the network, the configuration is generally carried out by hand in a trial-and-error fashion. However, the manual tuning of the hyperparameters of ANN through the trial-and-error process and finding accurate configurations consume a long time. A different approach uses global optimization techniques, for instance, applying the PSO algorithm to choose the best architectures with minimum error. As can be seen from the discussion of the related literature, most of the proposed methods suffer from some limitations, such as using a certain fixed NHL, a fixed NNPHL, or a fixed number of iterations. In most of these methods, the proposed approaches study the effect of only one of these hyperparameters at a time. Some of the suggested algorithms are time-consuming and have a high computational cost. Particle swarm optimization was very usefully employed in finding optimal parameters [49]. For that reason, in our earlier work [50], the effect of NHL and NNPHL on the performance of ANN using PSO was studied and discussed.

The purpose of the proposed model is to provide a potential solution for the problem of designing the best structure for a multilayer ANN (MLANN) via selecting the best configuration for the network, that is, choosing the best hyperparameters, including NHL, NNPHL, ACTFUN, and the learning rate, maximizing prediction accuracy and minimizing processing time.

Now we modify the standard PSO by adding the "center of gravity (CoG)" concept, which can be characterized as a mean of gravities calculated by their distances from a referenced center. A new algorithm called PSO with center of gravity (PSOCoG) is proposed in this paper, as described in the following paragraphs. Assume that there are $N_g$ single gravitational points $1, \ldots, N_g$. The movement of these gravitational individuals is identified by appointing their location vectors $\vec{s}_1, \ldots, \vec{s}_{N_g}$, i.e., $\vec{s}_i(t)$, where $i = 1, \ldots, N_g$. CoG is a position in a gravities system where the location vector $\vec{S}$ is determined utilizing the gravities $g_i$ and location vectors $\vec{s}_i$ in this way:

$$\vec{S} = \frac{1}{G} \sum_{i=1}^{N_g} g_i \times \vec{s}_i, \qquad (4)$$

$$G = \sum_{i=1}^{N_g} g_i, \qquad (5)$$

where $G$ is the sum of gravities and $N_g$ is the number of gravities [51].

In this technique, an efficient CoG-particle (Xcg) is suggested. Xcg will participate in speeding up the convergence of the model in fewer repetitions. It helps in finding a solution closer to the optimal one and enhances the quality of the solution. The CoG-particle is a virtual member of the swarm used to express the swarm at each repetition. In addition, the CoG-particle is weighted via the values of a fitness function for the swarm members. It has no speed and does not involve any of the standard swarm member's tasks, like fitness valuation and searching for the optimal solution. The weighted center of the swarm can be determined as follows, by considering the swarm individuals as a group of gravity points and making the value of the target function of each individual match the gravity:

$$X_{cg}(t) = \frac{1}{F_{cg}} \sum_{i=1}^{N_g} F(x_i) \times x_i(t), \qquad (6)$$

where $F(x_i)$ is the fitness value of member $i$ at location $x_i(t)$ and $F_{cg}$ is the sum of the values of fitness for all members. Using formula (7), $F_{cg}$ can be determined similarly to the sum of gravities $G$ in formula (5). The fitness value $F(x_i)$ for maximum optimization is given in formula (8), while formula (9) is used for minimum optimization:

$$F_{cg} = \sum_{i=1}^{N_g} F(x_i), \qquad (7)$$

$$F(x_i) = f(x_i), \qquad (8)$$

$$F(x_i) = \frac{1}{f(x_i) + \varepsilon}, \quad \varepsilon \neq 0, \qquad (9)$$

where $f(x_i)$ is the objective function, whose range is supposed to be positive in this situation.
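The CoG-particle of equations (6)-(9) can be sketched in a few lines of NumPy. This is an illustrative reconstruction rather than the authors' code; the function name is ours, and the small `eps` guards against division by zero in the minimization case:

```python
import numpy as np

def cog_particle(positions, fitness, minimize=True, eps=1e-9):
    """Compute the CoG-particle Xcg of a swarm, per equations (6)-(9).

    positions: (Ng, D) array of particle locations x_i(t).
    fitness:   (Ng,) array of objective values f(x_i).
    For minimization, F(x_i) = 1 / (f(x_i) + eps) as in equation (9),
    so better (smaller) objective values receive larger weights.
    """
    f = np.asarray(fitness, dtype=float)
    F = 1.0 / (f + eps) if minimize else f    # eq. (9) or eq. (8)
    Fcg = F.sum()                             # eq. (7)
    # eq. (6): fitness-weighted mean of the particle positions
    return (F[:, None] * np.asarray(positions, dtype=float)).sum(axis=0) / Fcg
```

For example, two particles at [0, 0] and [2, 2] with minimization fitness 1 and 3 yield a CoG pulled toward the better particle.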

The updated speed formula for each member can be created via equation (10) by computing $X_{cg}$, the best global solution at the whole swarm level $P_{gs}$, and the best local solution detected by every individual $P_{il}$, as follows:

$$v_{id}^{(t+1)} = w v_{id}^{(t)} + c_1 r_1 \left( \frac{P_{ild}^{(t)} + X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right) + c_2 r_2 \left( \frac{P_{gsd}^{(t)} - X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right), \quad i = 1, \ldots, N_g. \qquad (10)$$

Then the modified location for any individual in the swarm is calculated using

$$X_{id}^{(t+1)} = v_{id}^{(t+1)} + X_{id}^{(t)}, \quad i = 1, \ldots, N_g, \qquad (11)$$

where the particle moves in a multidimensional space. The searching cycle proceeds till the termination condition is true. The idea behind the CoG-particle ($X_{cg}$) is to drive the convergence of swarm individuals towards the optimal global solution and help them move away from local solutions. This may be clarified by explaining the function of the second and third parts of (10). The term $((P_{ild}^{(t)} + X_{cg}^{(t)})/2 - X_{id}^{(t)})$ governs the attraction of the present location of the individual towards the average of the positive direction of its best local solution ($+P_{ild}$) and the positive direction of the location of the CoG-particle ($+X_{cg}$), which assists the personal experience term in moving away from local solutions, while the term $((P_{gsd}^{(t)} - X_{cg}^{(t)})/2 - X_{id}^{(t)})$ governs the attraction of the present location of the swarm member towards the average of the positive direction of the best global solution ($+P_{gs}$) and the negative direction of the location of the CoG-particle ($-X_{cg}$), which assists in keeping the efficiency of the population diversity (exploration and exploitation process) through the searching operation. This increases the probability of rapidly approaching global solutions (or near-global solutions), where the CoG-particle pulls swarm members to the area of the best-discovered promising solutions. Consequently, this offers individuals the optimal opportunity to take the location of the global best-discovered candidate solution through the discovery phase. All earlier motions are supported by a linearly decreasing weight, which allows adjusting the population diversity through the searching operation.
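The update rules (10) and (11) translate directly into one vectorized step. The following is a minimal sketch under our own naming (not the authors' code), assuming the CoG-particle `X_cg` has already been computed for the current iteration:

```python
import numpy as np

def psocog_step(X, V, P_il, P_gs, X_cg, w=0.9, c1=2.0, c2=2.0, rng=None):
    """One PSOCoG velocity/position update, per equations (10) and (11).

    X, V:  (Ng, D) current positions and velocities.
    P_il:  (Ng, D) per-particle best positions.
    P_gs:  (D,)    swarm-best position.
    X_cg:  (D,)    CoG-particle position.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    Ng, D = X.shape
    r1 = rng.random((Ng, D))
    r2 = rng.random((Ng, D))
    V_new = (w * V
             + c1 * r1 * ((P_il + X_cg) / 2.0 - X)   # personal term, pulled toward +Xcg
             + c2 * r2 * ((P_gs - X_cg) / 2.0 - X))  # social term, pushed away from Xcg
    X_new = X + V_new                                # eq. (11)
    return X_new, V_new
```

Note that, unlike standard PSO, the attractors are the midpoints (P_il + X_cg)/2 and (P_gs - X_cg)/2 rather than P_il and P_gs themselves.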

The proposed prediction model used the new PSOCoG technique to train ANN, and the proposed model is called the ANN-PSOCoG model. In the ANN-PSOCoG model, a 4-dimensional search space was used. The first dimension is the number of HL, the second dimension is NPHL, the third dimension is the type of activation function (ACTFUN), and the fourth dimension is the learning rate. Any particle inside the discovery area is considered as a candidate solution. This means that the location of the particle determines the values of NHL, NNPHL, ACTFUN, and LR, which represent a possible configuration for the network. In the search phase, the PSO technique is used to find the best settings by flying the particles within a bounded search space. Each particle


has its attributes, which are location, speed, and fitness value calculated by a fitness function. The particle's speed defines the next movement (direction and traveled distance). The fitness value represents an index for the convergence of the particle to the solution. The position of each particle is updated to approach the individual that has an optimal location according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution that the particle can get personally; the second is the global optimal solution, that is, the solution that the swarm can obtain cooperatively till now. The suggested technique used several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

$$f_i = \frac{1}{m} \sum_{i=1}^{m} (\text{expected output}_i - \text{actual output}_i)^2, \qquad (12)$$

where $m$ is the number of inputs for ANN. The fitness values of all swarm members are determined using (12). For each particle, the location is stored as $X_i$ and the fitness value is stored as $P_{il}$. Among all $P_{il}$, the $P_{il}$ with the minimum fitness value is selected as the global best particle ($P_{gs}$); then the location and fitness value of $P_{gs}$ are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process is iterated till the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters that are used to build the ANN. Algorithm 1 explains the suggested model.

5. Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of Standard and Poor's 500 (S&P 500) [52] and Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The 500 stocks that highlight the performance of large-cap entities, selected by leading economists and weighted by market values, are referred to as the Standard and Poor's 500 Index. The S&P 500 Index, or the Standard & Poor's 500 Index, is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3,776 trading days, from August 16, 2005, to August 16, 2020. Also, we used historical data of the DJIA dataset, where the overall number of observations for the exchange indices was 9,036 trading days, from January 28, 1985, to December 2, 2020. Each observation includes the opening price, highest price, lowest price, the

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (f_i) for each particle
(7)     if (the fitness value is less than P_il)
(8)       Set the current fitness value as the new (P_il)
(9)     end-if
(10)    Select the minimum (f_i) of all particles in the swarm as P_gs
(11)    for d = 1 to D
(12)      Calculate V_id^(t+1) from Equation (10)
(13)      Calculate X_id^(t+1) from Equation (11)
(14)    end-for
(15)  end-for
(16)  return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.
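Algorithm 1 can be sketched end-to-end as follows. This is a hedged toy reconstruction, not the authors' implementation: a simple sphere objective stands in for the ANN-training fitness of equation (12), hyperparameter decoding is omitted, and all names are ours:

```python
import numpy as np

def psocog(objective, dim, pop=20, maxite=100, w_max=0.9, w_min=0.4,
           c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=1):
    """Minimize `objective` with PSOCoG (Algorithm 1, toy fitness)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))            # step (2): random swarm
    V = np.zeros((pop, dim))
    f_cur = np.apply_along_axis(objective, 1, X)   # step (3): evaluate particles
    P_il, f_il = X.copy(), f_cur.copy()
    g = f_il.argmin()
    P_gs, f_gs = P_il[g].copy(), f_il[g]
    for t in range(maxite):                        # step (4): main loop
        w = w_max - (w_max - w_min) * t / maxite   # linearly decreasing inertia
        F = 1.0 / (f_cur + 1e-9)                   # eq. (9): minimization fitness
        X_cg = (F[:, None] * X).sum(axis=0) / F.sum()   # eq. (6): CoG-particle
        r1 = rng.random((pop, dim))
        r2 = rng.random((pop, dim))
        V = (w * V + c1 * r1 * ((P_il + X_cg) / 2.0 - X)
                   + c2 * r2 * ((P_gs - X_cg) / 2.0 - X))  # eq. (10)
        X = X + V                                           # eq. (11)
        f_cur = np.apply_along_axis(objective, 1, X)
        better = f_cur < f_il                      # steps (7)-(8): local bests
        P_il[better], f_il[better] = X[better], f_cur[better]
        g = f_il.argmin()                          # step (10): global best
        if f_il[g] < f_gs:
            P_gs, f_gs = P_il[g].copy(), f_il[g]
    return P_gs, f_gs

best, val = psocog(lambda x: float((x ** 2).sum()), dim=3)
```

In the paper, each position is instead decoded into (NHL, NNPHL, ACTFUN, LR), a network is built and trained per particle, and equation (12) supplies the fitness.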


closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and the total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, the data are divided as follows: 80% of the available data were used as a training set, 10% of the available data were used as a validation set, and the last 10% of the available data were used as a test set.
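Since the observations are daily prices, the 80%/10%/10% split described above is chronological rather than shuffled. A minimal sketch (names are ours, not the authors' code):

```python
def chronological_split(data, train=0.80, val=0.10):
    """Split time-ordered observations into training, validation, and
    test sets (80%/10%/10%) without shuffling, preserving chronology."""
    n = len(data)
    i = int(n * train)          # end of training set
    j = int(n * (train + val))  # end of validation set
    return data[:i], data[i:j], data[j:]

days = list(range(3776))        # e.g., the 3,776 S&P 500 trading days
tr, va, te = chronological_split(days)
```

Keeping the split in time order avoids leaking future prices into the training set.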

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) = 20 and the maximum number of iterations (maxite) = 100. Besides, the following settings are used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (HL, NPHL, ACTFUN, and LR) is used to represent the hyperparameters of ANN.
(vi) Initialization ranges for HL were set to [1, 7] according to [44, 58].
(vii) NPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg-Marquardt algorithm [62] was used as the training function for the network.
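One way to realize the 4-dimensional encoding above is to decode each particle position into concrete hyperparameters. The paper does not specify this mapping, so the following clamp-and-round scheme is only an assumed illustration using the ranges listed above:

```python
ACTFUNS = ['logsig', 'softmax', 'tansig', 'purelin']  # candidate activation functions

def decode_particle(pos):
    """Map a 4-D particle position (hl, nphl, act, lr) to ANN
    hyperparameters, clamping each coordinate to its range:
    HL in [1, 7], NPHL in [14, 21], ACTFUN index in [0, 3], LR in [0.01, 0.9]."""
    hl   = int(round(min(max(pos[0], 1), 7)))      # number of hidden layers
    nphl = int(round(min(max(pos[1], 14), 21)))    # nodes per hidden layer
    act  = ACTFUNS[int(min(max(pos[2], 0), 3))]    # activation function
    lr   = min(max(pos[3], 0.01), 0.9)             # learning rate
    return hl, nphl, act, lr
```

The decoded tuple would then parameterize the network built and trained for that particle's fitness evaluation.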

In this study, mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \qquad (13)$$

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \qquad (14)$$

$$\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \qquad (15)$$

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (\text{actual output}_i - \text{expected output}_i)^2. \qquad (16)$$
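Equations (13)-(16) implement directly in NumPy; a straightforward sketch (our names, not the authors' code):

```python
import numpy as np

def mape(actual, expected):
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return np.mean(np.abs((a - e) / a)) * 100     # eq. (13)

def mae(actual, expected):
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return np.mean(np.abs(a - e))                 # eq. (14)

def relative_error(actual, expected):
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return (a - e) / a                            # eq. (15), one value per observation

def mse(actual, expected):
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return np.mean((a - e) ** 2)                  # eq. (16)
```

Note that RE is per observation (as plotted in the figures below), while MAPE, MAE, and MSE aggregate over the whole test set.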

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG is also compared with other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only the artificial neural network was used to predict the closing price. The historical data of Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg-Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE for this method is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, the standard particle swarm optimization (SPSO) with the CoG concept was used to predict the closing price using historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE for this method is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and the standard particle swarm optimization (ANN-SPSO) was


used to predict the closing price, where SPSO was used to train the network to select the best configurations. The dataset of historical data of the S&P 500 was used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of the HL, NPHL, ACTFUN, and LR. The dataset of historical data of the S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7,

Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: NHL was equal to 6, NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.

Although the process of determining the best hyperparameters of the ANN consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of regression (R) shows the correlation between the forecasted outputs and targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of the regression coefficient (R) was 0.99995 for training, 0.99992 for validation, and 0.99994 for testing, while for all it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the output forecasted via the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with very high accuracy, close to the actual closing price. Based on the above discussion, it is noted that the proposed model is able to give investors the confidence to make the correct decisions to buy or sell shares, achieving the largest possible profit and avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, as MAPE represents the fluctuation of ECP around ACP. So, if the fluctuation of the price increases, the error of prediction increases, and the risk of predicting the closing price for the stock market increases.

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, while it outperformed SPSOCoG by about 17%; it also outperformed SPSO by about 20%, and ANN-PSOCoG outperformed ANN by approximately 25%.

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, while it outperformed SPSOCoG by about 24%; it also outperformed SPSO by about 33%, and ANN-PSOCoG outperformed ANN by approximately 42%. Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, MAPE, and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG's performance is the best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best against the other models, and ANN-PSOCoG is more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric through the training phase. Figure 10(a) shows MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the best minimum value of MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest compared to the other models.

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.


5.3.7. Study of the Efficiency of ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. An example is the drop of the S&P 500 index in the USA during the COVID-19 crisis, where it lost one-third of its value during only one month [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dotcom crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020, it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year during the subprime crisis and one year and a half during the dotcom bust.

Till September 2, 2020, there had been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training, R = 0.99995. (b) Validation, R = 0.99992. (c) Test, R = 0.99994. (d) All, R = 0.99994.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019, to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices such as the S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG over historical data of the S&P 500. Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG over historical data of the S&P 500.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using historical data of the NASDAQ-100 are shown in Figure 14(a), while RE of ANN-PSOCoG using data of the NASDAQ-100 is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and other models.

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics            | ANN-PSOCoG | ANN-SPSO  | SPSOCoG  | SPSO       | ANN
MAPE               | 0.00024    | 0.0027    | 0.0038   | 0.0053     | 0.0070
MAE                | 0.00017    | 0.047     | 0.067    | 0.070      | 0.087
Elapsed time (sec) | 510.82822  | 891.28091 | 952.1345 | 1123.75813 | 1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics            | ANN-PSOCoG | ANN-SPSO | SPSOCoG  | SPSO     | ANN
MAPE               | 0.00072    | 0.030    | 0.042    | 0.062    | 0.071
MAE                | 0.00055    | 0.0024   | 0.0033   | 0.0049   | 0.0056
Elapsed time (sec) | 834.259    | 973.215  | 1056.284 | 1389.368 | 1537.569


Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index, normalized to 100 at its peak, versus days from the peak, for the dot-com, subprime, and COVID-19 crises).

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of the S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test was training the proposed model using the historical data of CANUSD, which represents the ratio between the Canadian dollar and the American dollar. The actual rate and expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of the S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 11-14, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD (days, 1-Dec-19 to 27-Aug-20). (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification gives a new, efficient search technique. It benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model, integrated with ANN as a learning model, to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, namely, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Also, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that, using the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows a high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model. The values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

The proposed model might need more training time, and it requires modern computational resources when it is used with a very large dataset, an oil reservoir dataset, for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be discussed to improve the proposed model as follows:

(i) We study the effect of changing NNPHL for each layer, instead of keeping it the same in all hidden layers of the proposed model.

(ii) We extend the proposed ANN-PSOCoG to discuss the ability to select more hyperparameters, such as the number of iterations and the size of a batch.

(iii) We discuss the effect of the activation function, whether we apply the same activation function to all the layers or apply a different activation function to each layer.

(iv) We study the impact of using more than one swarm for PSOCoG.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG), in combination with the Elman neural network (ENN-PSOCoG), could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We discuss more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics    S&P 500     Gold        NASDAQ-100    CANUSD
MAPE       0.00158     0.007799    0.00239       0.001719891
MAE        0.030529    0.079009    0.224702      1.19491E-05
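As a hedged illustration, metrics of the kind reported in Table 3 can be computed along the following lines. The exact definitions of RE and MAPE below are assumptions inferred from the table and the RE plots (values expressed as fractions, not percentages), not the paper's code:

```python
import numpy as np

def prediction_errors(actual, expected):
    """Per-sample relative error (RE), mean absolute error (MAE),
    and mean absolute percentage error (MAPE, as a fraction)."""
    actual = np.asarray(actual, dtype=float)
    expected = np.asarray(expected, dtype=float)
    re = (expected - actual) / actual          # relative error per trading day
    mae = np.mean(np.abs(expected - actual))   # mean absolute error
    mape = np.mean(np.abs(re))                 # mean absolute relative error
    return re, mae, mape
```

For example, with actual closing prices [100, 200] and expected prices [101, 198], the relative errors are [0.01, -0.01], MAE is 1.5, and MAPE is 0.01.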


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.

[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) Bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard's and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, China, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in USA during the COVID-19 crisis, report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.


2.2. Standard Particle Swarm Optimization (SPSO). SPSO was presented by Kennedy and Eberhart [24]. SPSO mimics the simple behavior of organisms and their local cooperation with the environment and neighboring organisms to develop behaviors used for solving complex problems such as optimization issues. The PSO algorithm has many advantages compared to other swarm intelligence (SI) techniques. For instance, its search procedure is simple, effective, and easy to implement, and it can effectively find the best global solutions with high accuracy. PSO is a population-based search procedure in which each individual represents a particle, i.e., a possible solution. These particles are grouped into a swarm. The particles, moving within a multidimensional space, adapt their locations depending on their own experience and their neighbors'. The principle of the PSO technique can be explained [25] as follows. Let $X_i(t) = (x_{i1}^{(t)}, x_{i2}^{(t)}, \ldots, x_{id}^{(t)})$ denote the position of particle $i$ and $V_i(t) = (v_{i1}^{(t)}, v_{i2}^{(t)}, \ldots, v_{id}^{(t)})$ denote its velocity in the search area at time-step $t$. Also, let $P_i = (p_{i1}, p_{i2}, \ldots, p_{id})$ and $P_g = (p_{g1}, p_{g2}, \ldots, p_{gd})$ indicate the best position found by the particle itself and by the swarm, respectively. The new location of the particle is updated as follows:

$$V_{id}^{(t+1)} = w V_{id}^{(t)} + c_1 r_1 \left(P_{id} - X_{id}^{(t)}\right) + c_2 r_2 \left(P_{gd} - X_{id}^{(t)}\right), \quad i = 1, 2, \ldots, N, \quad (2)$$

$$X_{id}^{(t+1)} = X_{id}^{(t)} + V_{id}^{(t+1)}, \quad i = 1, 2, \ldots, N, \quad (3)$$

where the particle moves in a multidimensional space, $c_1$ and $c_2$ are positive constants, $r_1$ and $r_2$ are random numbers in the range [0, 1], and $w$ is the inertia weight. $V_i(t)$ controls the improvement process and represents both the personal experience of the particle and the knowledge swapped with the other surrounding particles. The personal experience of a particle is usually indicated as the cognitive term, and it represents the approach toward the best local position. The swapped knowledge is indicated as the social term in (2), which represents the approach toward the best global position of the swarm.
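The update rules in equations (2) and (3) can be sketched in a few lines of NumPy; this is a minimal illustration with assumed default coefficient values, not code from the paper:

```python
import numpy as np

def spso_step(X, V, P, Pg, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One standard PSO update, following equations (2) and (3).

    X, V : (N, d) arrays of particle positions and velocities.
    P    : (N, d) array of each particle's best-known position.
    Pg   : (d,) array, best position found by the whole swarm.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)  # fresh random factors in [0, 1)
    r2 = rng.random(X.shape)
    V_new = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)  # eq. (2)
    X_new = X + V_new                                       # eq. (3)
    return X_new, V_new
```

With all particles at the origin and both best positions at the all-ones vector, every velocity component is driven non-negative, pulling the swarm toward the known optimum.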

2.3. Stock Market. The stock market is an exchange where securities and shares are traded at a price organized by demand and supply [26]. According to [27], the stock market is defined by investors' reaction, which is driven by the information associated with the "real value" of companies. The stock market is one of the most important ways of building wealth, and stocks are the cornerstone of any investment portfolio.

The most significant stock market data, called technical data, include the closing price, the high price of a trading day, the low price of a trading day, and the volume of the stock market shares traded per day [28]. A stock index is a measure of a stock market that enables investors to compare new and old prices to assess the market performance [29]. There are many common indices, such as Standard & Poor's 500 (S&P 500), National Association of Securities Dealers Automated Quotations 100 (NASDAQ-100), Dow Jones Industrial Average (DJIA), Gold, and Canadian Dollar to United States Dollar (CADUSD), used with the stock market [30]. The historical data of each of these indices, including the technical data, can be used as a dataset to train the ANN.

3. Related Work

A survey of the relevant work shows that there are many studies about forecasting the stock market's future values. The recent technological advances that support AI have led to the appearance of a new trend of investing. In this regard, buy and sell decisions can be made by processing large amounts of historical data for stock market indices and determining the risks related to them with high accuracy. Many models fall within this trend, such as (1) a dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting [31], (2) a stock market trading rule based on pattern recognition and technical analysis [32], (3) an intelligent pattern recognition model for supporting investment decisions in the stock market [33], (4) using a support vector machine with a hybrid feature selection method for stock trend prediction [34], (5) forecasting stock market trends using machine learning algorithms [35], and (6) deep learning for stock market prediction [36]. For example, the author in [37] was the first to attempt to use an ANN to model the economic time series of the IBM company. This method tries to capture the nonlinear regularity in changes of asset prices, such as day-to-day variations. However, the scope of the work was limited, as the author used a feedforward neural network with only one hidden layer containing five hidden units, without considering the ability to use varying numbers of hidden layers or hidden nodes. Also, the author did not study the effect of the activation function or the learning rate.

In another study [38], the authors proposed a new forecasting method to predict Bitcoin's future price. In this method, PSO was used to select the best values for the number of hidden units, the input lag space, and the output lag space of the Nonlinear Autoregressive with Exogeneous Inputs model. The results showed the ability of the model to predict Bitcoin prices accurately. It is worth noting that the authors of [38] did not discuss the impact of the NHL, the ACTFUN, and the learning rate on the performance of the algorithm. However, the proposed technique considered

Figure 1: Artificial neuron model (inputs X0…Xn with synaptic weights W0…Wn and bias b, weighted sum b + X0W0 + X1W1 + … + XnWn, activation function f, and output).


NHL and used different values for NHL, which allows selecting the best hyperparameters of the ANN. The proposed model also studied the impact of the activation function and the LR.

A flexible neural tree (FNT) ensemble technique is proposed in [39]. The authors in [39] developed a trusted technique to model the behavior of the seemingly chaotic stock markets. They used a tree-structure-based evolutionary algorithm and PSO algorithms to optimize the structure and parameters of the FNT. The Nasdaq-100 index and the S&P CNX NIFTY stock index were used to evaluate their technique. The authors of [39] did not consider the impact of the depth of the neural tree, that is, NHL. They also did not study the impact of other hyperparameters, such as ACTFUN, the LR, and the number of children at each level, in other words, the number of hidden nodes.

In [40], the author presented a method based on Multilayer Perceptron and Long Short-Term Memory networks to predict stock market indices. The historical data of the Bombay Stock Exchange (BSE) Sensex from the Indian stock market were used to evaluate the method. In [40], the selected NN architecture was verified manually, not automatically, through a trial-and-error process. The author also used only one hidden layer and changed only the number of hidden units per hidden layer, without considering the activation function and the learning rate. In contrast, the proposed technique selects a suitable structure from all possible structures, which increases the probability of selecting the optimal structure, and uses multiple hidden layers and multiple nodes per hidden layer. The proposed technique varies NHL and NNPHL to select the optimal structure and considers the effect of ACTFUN and the LR.

In another work, the authors of [41] suggested an enhanced PSO to train Sigmoid Diagonal Recurrent Neural Networks (SDRNNs) and to optimize the parameters of the SDRNN. The historical data of the NASDAQ-100 and S&P 500 stock market indices were used to evaluate their model. The authors in [41] did not discuss selection of the best structure of the network in terms of NHL, NNPHL, ACTFUN, and LR.

The authors in [42] presented a new stock market prediction method that applies the firefly algorithm within an evolutionary framework. They applied their method to the Online Sequential Extreme Learning Machine (OSELM). Their model's performance was evaluated using the datasets of the BSE Sensex, NSE Sensex, S&P 500, and FTSE indices. The authors of [42] did not optimize the hyperparameters of the network.

In [43], the authors suggested a new end-to-end hybrid neural network approach to forecast the price direction of the stock market index. This approach depends on learning multiple time-scale features extracted from the daily price and a CNN that performs the prediction using fully connected layers, bringing together features learned by the Long Short-Term Memory network. The historical data of the S&P 500 index was used to evaluate this approach. The authors of [43] did not study the effect of hyperparameter optimization on the network's performance.

The authors of [44] proposed a hybrid model to forecast the Nikkei 225 index price for the Tokyo stock market. The hybrid model used ANN and genetic algorithms to optimize the accuracy of stock price prediction. The ANN was trained using backpropagation as a learning algorithm. In [44], the authors used an ANN with only one hidden layer and did not consider the effect of hyperparameters of the ANN such as NHL, NNPHL, ACTFUN, and LR. In [45], the authors proposed an adaptive system to predict the price in the stock market. This system uses the PSO algorithm to overcome the problems of the backpropagation approach and to train the ANN to forecast the price of the S&P 500 Index and the NASDAQ Composite Index, therefore helping investors make correct trading decisions. They also used an ANN with only one hidden layer with a fixed NNPHL equal to 2N-1, where N is the number of inputs. They did not consider the effect of other hyperparameters, such as ACTFUN and LR.

In [46], the authors studied the different modifications of PSO and their application to price prediction in the stock market. In another work [47], the PSO algorithm selects the optimally weighted set of trading signals that specify a buy, sell, or hold order, depending on evolutional learning from the New York Stock Exchange (NYSE) and the Stock Exchange of Thailand (SET).

In [48], the authors proposed an effective prediction model using PSO to forecast the S&P 500 and DJIA stock indices for the short and long term. This model applies an adaptive linear combiner (ALC) whose weights are modified by PSO. The authors compared their model with an MLP-based model. The results show that their model performs better than the MLP-based model in terms of accuracy and training time.

4. The Proposed Technique

Determining the neural network parameters to create an appropriate network architecture that produces output with acceptable error is very time- and cost-consuming. To determine the architecture of the network, the configuration is generally carried out by hand in a trial-and-error fashion. However, manually tuning the hyperparameters of an ANN through a trial-and-error process and finding accurate configurations consumes a long time. A different approach uses global optimization techniques, for instance, applying the PSO algorithm to choose the best architectures with minimum error. As can be seen from the discussion of the related literature, most of the proposed methods suffer from limitations, such as using a certain fixed NHL, a fixed NNPHL, or a fixed number of iterations. In most of these methods, the proposed approaches study the effect of only one of these hyperparameters at a time. Some of the suggested algorithms are time-consuming and have a high computational cost. Particle swarm optimization has been very usefully employed in finding optimal parameters [49]. For that reason, in our earlier work [50], the effect of NHL and NNPHL on the performance of ANN using PSO was studied and discussed.

The purpose of the proposed model is to provide a potential solution for the problem of designing the best structure for a multilayer ANN (MLANN) via selecting the best configuration for the network, for example, choosing the best hyperparameters, including NHL, NNPHL,


ACTFUN, and the learning rate, thereby maximizing prediction accuracy and minimizing processing time.
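To illustrate how a single particle can encode such a network configuration, a decoding step might look as follows. The dimension layout, activation-function names, and value ranges here are illustrative assumptions, not the paper's exact encoding:

```python
# Illustrative decoding of a particle position into ANN hyperparameters.
ACTIVATIONS = ["logsig", "tansig", "purelin"]  # assumed candidate set

def decode_particle(position):
    """Map a 4-dimensional particle to (NHL, NNPHL, ACTFUN, LR)."""
    nhl = max(1, int(round(position[0])))        # number of hidden layers
    nnphl = max(1, int(round(position[1])))      # neurons per hidden layer
    act = ACTIVATIONS[int(round(position[2])) % len(ACTIVATIONS)]
    lr = min(max(position[3], 1e-4), 1.0)        # learning rate, clamped
    return nhl, nnphl, act, lr
```

Each candidate decoded this way would be trained briefly and scored by its prediction error, which serves as the particle's fitness.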

Now, we modify the standard PSO by adding the "center of gravity (CoG)" concept, which can be characterized as a mean of gravities calculated by their distances from a referenced center. A new algorithm called PSO with center of gravity (PSOCoG) is proposed in this paper, as described in the following paragraphs. Assume that there are $N_g$ single gravitational points $1, \ldots, N_g$. The movement of these gravitational individuals is identified by appointing their location vectors $\vec{s}_1, \ldots, \vec{s}_{N_g}$, that is, $\vec{s}_i(t)$, where $i = 1, \ldots, N_g$. The CoG is a position in a system of gravities whose location vector $\vec{S}$ is determined using the gravities $g_i$ and location vectors $\vec{s}_i$ in this way:

$$\vec{S} = \frac{1}{G} \sum_{i=1}^{N_g} g_i \times \vec{s}_i, \quad (4)$$

$$G = \sum_{i=1}^{N_g} g_i, \quad (5)$$

where $G$ is the sum of gravities and $N_g$ is the number of gravities [51].

In this technique, an efficient CoG-particle ($X_{cg}$) is suggested. $X_{cg}$ participates in speeding up the convergence of the model in fewer repetitions; it helps in finding a solution closer to the optimum and enhances the quality of the solution. The CoG-particle is a virtual member of the swarm used to represent the swarm at each repetition. In addition, the CoG-particle is weighted by the values of the fitness function of the swarm members. It has no speed and does not perform any of the standard swarm member's tasks, such as fitness evaluation or searching for the optimal solution. The weighted center of the swarm can be determined as follows, by considering the swarm individuals as a group of gravity points and making the value of the target function of each individual play the role of its gravity:

$$X_{cg}(t) = \frac{1}{F_{cg}} \sum_{i=1}^{N_g} F(x_i) \, x_i(t), \qquad (6)$$

where $F(x_i)$ is the fitness value of member $i$ at location $x_i(t)$ and $F_{cg}$ is the sum of the fitness values of all members. Using formula (7), $F_{cg}$ can be determined similarly to the sum of gravities $G$ in formula (5). The fitness value $F(x_i)$ for maximum optimization is given in formula (8), while formula (9) is used for minimum optimization:

$$F_{cg} = \sum_{i=1}^{N_g} F(x_i), \qquad (7)$$

$$F(x_i) = f(x_i), \qquad (8)$$

$$F(x_i) = \frac{1}{f(x_i) + \varepsilon}, \quad \varepsilon \neq 0, \qquad (9)$$

where $f(x_i)$ is the objective function, whose range is supposed to be positive in this situation.
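To make equations (6), (7), and (9) concrete, the following minimal sketch (the function name is hypothetical, not from the paper) computes the CoG-particle of a small swarm for a minimization objective, using the reciprocal fitness of (9) as the gravity of each particle:

```python
import numpy as np

def cog_particle(positions, f, eps=1e-9):
    """CoG-particle of a swarm, eqs. (6)-(7): the fitness-weighted mean of the
    particle positions, with gravities F(x_i) = 1/(f(x_i) + eps) from eq. (9)."""
    weights = np.array([1.0 / (f(x) + eps) for x in positions])        # eq. (9)
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()  # eqs. (6)-(7)
```

For two particles at (0, 0) and (2, 2) with f(x) = ||x||² + 1, the better particle receives nine times the weight, so the CoG lies at (0.2, 0.2), much closer to the fitter member.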

The updated speed formula for each member can be created via equation (10) by computing $X_{cg}$, the best global solution at the whole-swarm level $P_{gs}$, and the best local solution detected by every individual $P_{il}$, as follows:

$$v_{id}^{(t+1)} = w v_{id}^{(t)} + c_1 r_1 \left( \frac{P_{ild}^{(t)} + X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right) + c_2 r_2 \left( \frac{P_{gsd}^{(t)} - X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right), \quad i = 1, \dots, N_g. \qquad (10)$$

Then, the modified location of any individual in the swarm is calculated using

$$X_{id}^{(t+1)} = v_{id}^{(t+1)} + X_{id}^{(t)}, \quad i = 1, \dots, N_g, \qquad (11)$$

where the particle moves in a multidimensional space. The searching cycle proceeds until the termination condition is true. The idea behind the CoG-particle ($X_{cg}$) is to drive the convergence of swarm individuals towards the global optimum and to help them move away from local solutions. This can be clarified by explaining the function of the second and third terms in (10). The term $((P_{ild}^{(t)} + X_{cg}^{(t)})/2 - X_{id}^{(t)})$ attracts the present location of the individual towards the average of the positive direction of its best local solution ($+P_{il}$) and the positive direction of the location of the CoG-particle ($+X_{cg}$), which helps the personal-experience term move away from local solutions. The term $((P_{gsd}^{(t)} - X_{cg}^{(t)})/2 - X_{id}^{(t)})$ attracts the present location of the swarm member towards the average of the positive direction of the best global solution ($+P_{gs}$) and the negative direction of the location of the CoG-particle ($-X_{cg}$), which helps preserve population diversity (the exploration and exploitation process) throughout the searching operation. This increases the probability of rapidly approaching global solutions (or near-global solutions), since the CoG-particle pulls swarm members towards the region of the best discovered promising solutions. Consequently, individuals get the optimal opportunity to take the location of the globally best-discovered candidate solution during the discovery phase. All these motions are supported by a linearly decreasing inertia weight, which allows the population diversity to be adjusted during the searching operation.
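Putting equations (6)–(11) together, a complete PSOCoG minimization loop can be sketched as follows. This is a minimal illustration under assumptions (a generic positive objective, a clamped search box, and the $c_1 = c_2 = 2$ and linearly decreasing inertia weight reported later in Section 5.2), not the authors' implementation:

```python
import numpy as np

def psocog_minimize(f, dim, n_particles=20, max_iter=100,
                    w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
                    bounds=(-5.0, 5.0), eps=1e-9, seed=0):
    """Minimize a positive objective f with the PSOCoG updates, eqs. (6)-(11)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    p_best = x.copy()                                # best local solutions P_il
    p_best_val = np.apply_along_axis(f, 1, x)
    g = p_best_val.argmin()
    g_best, g_best_val = p_best[g].copy(), p_best_val[g]  # global best P_gs
    for t in range(max_iter):
        w = w_max - (w_max - w_min) * t / max_iter   # linearly decreasing inertia
        fit = 1.0 / (np.apply_along_axis(f, 1, x) + eps)    # gravities, eq. (9)
        x_cg = (fit[:, None] * x).sum(axis=0) / fit.sum()   # CoG-particle, eq. (6)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = (w * v
             + c1 * r1 * ((p_best + x_cg) / 2.0 - x)        # 2nd term of eq. (10)
             + c2 * r2 * ((g_best - x_cg) / 2.0 - x))       # 3rd term of eq. (10)
        x = np.clip(x + v, lo, hi)                          # eq. (11), kept in box
        vals = np.apply_along_axis(f, 1, x)
        better = vals < p_best_val
        p_best[better], p_best_val[better] = x[better], vals[better]
        g = p_best_val.argmin()
        if p_best_val[g] < g_best_val:
            g_best, g_best_val = p_best[g].copy(), p_best_val[g]
    return g_best, g_best_val
```

On a 2-D sphere objective the swarm contracts toward the origin; swapping eq. (9) for eq. (8) in the `fit` computation turns the loop into a maximizer.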

The proposed prediction model uses the new PSOCoG technique to train the ANN, and the resulting model is called the ANN-PSOCoG model. In the ANN-PSOCoG model, a 4-dimensional search space is used: the first dimension is the number of hidden layers (NHL), the second is the number of neurons per hidden layer (NNPHL), the third is the type of activation function (ACTFUN), and the fourth is the learning rate (LR). Any particle inside the discovery area is considered a candidate solution; that is, the location of the particle determines the values of NHL, NNPHL, ACTFUN, and LR, which represent a possible configuration for the network. In the search phase, the PSO technique is used to find the best settings by flying the particles within a bounded search space. Each particle has its own attributes, which are its location, speed, and fitness value calculated by a fitness function. The particle's speed defines the next movement (direction and traveled distance), and the fitness value represents an index of the convergence of the particle towards the solution. The position of each particle is updated to approach the individual that has the optimal location, according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution that the particle can reach personally; the second is the global optimal solution, which is the solution that the swarm can obtain cooperatively so far. The suggested technique uses several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

$$f_i = \frac{1}{m} \sum_{k=1}^{m} \left( \text{expected output}_k - \text{actual output}_k \right)^2, \qquad (12)$$

where m is the number of inputs for the ANN. The fitness values of all swarm members are determined using (12). For each particle, the location is stored as $X_i$ and the fitness value is stored as $P_{il}$. Among all $P_{il}$, the $P_{il}$ with the minimum fitness value is selected as the global best particle ($P_{gs}$); then the location and fitness value of $P_{gs}$ are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process is iterated until the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters, which are used to build the ANN. Algorithm 1 explains the suggested model.
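As an illustration of how a particle position maps to a network configuration, the sketch below decodes a 4-dimensional position into (NHL, NNPHL, ACTFUN, LR). The bounds and activation list mirror the parameter settings reported in Section 5.2, but the function and constant names are assumptions for illustration:

```python
import numpy as np

# Hypothetical particle encoding for the 4-dimensional search space:
# [NHL, NNPHL, activation-function index, learning rate].
ACT_FUNS = ["logsig", "softmax", "tansig", "purelin"]
BOUNDS = np.array([[1, 7],        # NHL: number of hidden layers
                   [14, 21],      # NNPHL: neurons per hidden layer
                   [0, 3],        # index into ACT_FUNS
                   [0.01, 0.9]])  # learning rate

def decode_particle(position):
    """Map a continuous particle position to a concrete ANN configuration."""
    p = np.clip(position, BOUNDS[:, 0], BOUNDS[:, 1])
    return {
        "n_hidden_layers": int(round(p[0])),
        "neurons_per_layer": int(round(p[1])),
        "activation": ACT_FUNS[int(round(p[2]))],
        "learning_rate": float(p[3]),
    }
```

Decoding the position (5.7, 20.6, 3.2, 0.5469), for example, yields the best configuration reported later in Scenario 5: six hidden layers, 21 neurons per layer, purelin, and a learning rate of 0.5469.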

5 Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of Standard and Poor's 500 (S&P 500) [52] and Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The S&P 500 Index, or the Standard & Poor's 500 Index, is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market; its stocks highlight the performance of large-cap entities, selected by leading economists and weighted by market values. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3776 trading days, from August 16, 2005 to August 16, 2020. We also used the historical data of the DJIA dataset, where the overall number of observations for the exchange indices was 9036 trading days, from January 28, 1985 to December 02, 2020. Each observation includes the opening price, the highest price, the lowest price, the closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, 80% of the available data were used as a training set, 10% of the available data were used as a validation set, and the last 10% of the available data were used as a test set.

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (f_i) for each particle
(7)     if (the fitness value is less than P_il)
(8)       Set the current fitness value as the new P_il
(9)     end-if
(10)    Select the minimum f_i of all particles in the swarm as P_gs
(11)    for d = 1 to D
(12)      Calculate v_id^(t+1) from equation (10)
(13)      Calculate X_id^(t+1) from equation (11)
(14)    end-for
(15)  end-for
(16)  return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.
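The 80/10/10 chronological split described above can be sketched as follows (a hypothetical helper; for time series, the split must preserve order rather than shuffle, so the test set is always the most recent data):

```python
import numpy as np

def chronological_split(data, train=0.8, val=0.1):
    """Split ordered observations into train/validation/test sets without
    shuffling, keeping the most recent observations for testing."""
    n = len(data)
    i, j = int(n * train), int(n * (train + val))
    return data[:i], data[i:j], data[j:]
```

For the 3776 S&P 500 observations, this yields 3020 training days, 378 validation days, and 378 test days.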

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) was 20 and the maximum number of iterations (maxite) was 100. Besides, the following settings were used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (NHL, NNPHL, ACTFUN, and LR) was used to represent the hyperparameters of the ANN.
(vi) The initialization range for NHL was set to [1, 7] according to [44, 58].
(vii) NNPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was chosen from {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg–Marquardt algorithm [62] was used as the training function for the network.

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \qquad (13)$$

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \qquad (14)$$

$$\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \qquad (15)$$

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \text{actual output}_i - \text{expected output}_i \right)^2. \qquad (16)$$
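Equations (13)–(16) translate directly into code; the helper below (a hypothetical name, not the authors' code) returns MAPE, MAE, the per-observation RE vector, and MSE:

```python
import numpy as np

def evaluation_metrics(actual, expected):
    """Compute MAPE, MAE, RE, and MSE as defined in equations (13)-(16)."""
    actual = np.asarray(actual, dtype=float)
    expected = np.asarray(expected, dtype=float)
    err = actual - expected
    mape = np.mean(np.abs(err / actual)) * 100.0   # eq. (13), in percent
    mae = np.mean(np.abs(err))                     # eq. (14)
    re = err / actual                              # eq. (15), one value per day
    mse = np.mean(err ** 2)                        # eq. (16)
    return mape, mae, re, mse
```

For actual prices [100, 200] and predictions [90, 220], this gives MAPE = 10.0, MAE = 15.0, RE = [0.1, -0.1], and MSE = 250.0.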

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG was also compared with the other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only an artificial neural network was used to predict the closing price. The historical data of Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg–Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using the historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE for this method is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization with the CoG concept was used to predict the closing price using the historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE for this method is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and standard particle swarm optimization (ANN-SPSO) was used to predict the closing price, where SPSO was used to train the network to select the best configurations. The historical data of the S&P 500 were used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of NHL, NNPHL, ACTFUN, and LR. The historical data of the S&P 500 were used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7

Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: NHL was equal to 6, NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.

Although the process of determining the best hyperparameters of an ANN usually consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of the regression coefficient (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of R for training was 0.99995, for validation 0.99992, and for test 0.99994, while for all it was 0.99994. The prediction accuracy of ANN-PSOCoG can therefore be considered very high, since the output forecasted by the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG), and the relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with a very high accuracy, close to the actual closing price. Based on the above discussion, the proposed model can give investors the confidence to make the correct decisions to buy or sell shares, achieving the largest possible profit while avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, since it captures the fluctuation of ECP around ACP: if the fluctuation of the price increases, the prediction error increases, and the risk of predicting the closing price for the stock market increases.

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using the historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, while it outperformed SPSOCoG by about 17%; it also outperformed SPSO by about 20%, and ANN-PSOCoG outperformed ANN by approximately 25%.

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, while it outperformed SPSOCoG by about 24%; it also outperformed SPSO by about 33%, and ANN-PSOCoG outperformed ANN by approximately 42%. Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, together with MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG performs best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best among the compared models, and ANN-PSOCoG is more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric during the training phase. Figure 10(a) shows MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the lowest minimum value of MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest compared to the other models.

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.


5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. An example is the drop of the S&P 500 index in the USA during the COVID-19 crisis, where it lost one-third of its value during only one month [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dot-com crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020, it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year during the subprime crisis and one year and a half during the dot-com bust.

Up to September 2, 2020, there had been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training: R = 0.99995. (b) Validation: R = 0.99992. (c) Test: R = 0.99994. (d) All: R = 0.99994.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019 to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG to the historical data of the S&P 500, and Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG on the historical data of the S&P 500.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG on the historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG on the historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using the historical data of NASDAQ-100 are shown in Figure 14(a), while RE of ANN-PSOCoG using the data of NASDAQ-100 is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and the other models (ACP versus the expected closing prices of all models, S&P 500 dataset).

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO         ANN
MAPE                0.00024      0.0027      0.0038     0.0053       0.0070
MAE                 0.00017      0.047       0.067      0.070        0.087
Elapsed time (sec)  510.82822    891.28091   952.1345   1123.75813   1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG   ANN-SPSO   SPSOCoG    SPSO       ANN
MAPE                0.00072      0.030      0.042      0.062      0.071
MAE                 0.00055      0.0024     0.0033     0.0049     0.0056
Elapsed time (sec)  834.259      973.215    1056.284   1389.368   1537.569


Figure 11: Effect of COVID-19 on the stock market in the USA. The S&P 500 index (normalized to 100 at the peak) versus days from the peak for the dot-com, subprime, and COVID-19 crises.

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over the historical data of the S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test was training the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the American dollar. The actual rate and expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12–15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over the historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over the historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over the historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6 Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification gives a new, efficient search technique that benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with an ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network, with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, SPSOCoG, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. The results also show that, using the DJIA dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18%. In terms of elapsed time, the results showed that, using the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model. Using the DJIA dataset, the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG also shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different datasets of stock markets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model: the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

The proposed model might need more training time, and it requires modern computational resources, when it is used with a very huge dataset, an oil reservoir dataset for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be considered to improve the proposed model, as follows:

(i) Study the effect of changing NNPHL for each layer instead of keeping it the same in all hidden layers of the proposed model.

(ii) Extend the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) Discuss the effect of the activation function, that is, whether the same activation function is applied to all layers or a different activation function is applied to each layer.

(iv) Study the impact of using more than one swarm for PSOCoG.

(v) Carry out a comparison of the proposed model with other techniques such as ANN-GA or ANN-ACO.

(vi) Use the proposed algorithm (PSOCoG) in combination with an Elman neural network (ENN-PSOCoG) to predict the closing price and compare it with ANN-PSOCoG using a huge dataset.

(vii) Discuss more metrics such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics  S&P 500    Gold       NASDAQ-100   CANUSD
MAPE     0.00158    0.007799   0.00239      0.001719891
MAE      0.030529   0.079009   0.224702     1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A.-L. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1: learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] B. U. Islam, Z. Baharudin, M. Q. Raza, and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets: https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. Garcia, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-L. Chen and F.-Y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.

[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogenous input (NARX) Bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard and Poor's 500 (S&P 500): https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices: https://finance.yahoo.com/quote/%5EDJI/history?period1=475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/rimanual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. More, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis: report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard: https://covid19.who.int.

[67] Historical data for stock market indices: https://finance.yahoo.com/lookup.


NHL and used different values of NHL, which allows selecting the best hyperparameters of the ANN. The proposed model also studied the impact of the activation function and the LR.

A flexible neural tree (FNT) ensemble technique is proposed in [39]. The authors in [39] developed a trusted technique to model the attitude of the seemingly chaotic stock markets. They used a tree-structure-based evolutionary algorithm and PSO algorithms to optimize the structure and parameters of the FNT. The Nasdaq-100 index and the S&P CNX NIFTY stock index are used to evaluate their technique. The authors of [39] did not consider the impact of the depth of the neural tree, that is, NHL. They also did not study the impact of other hyperparameters such as ACTFUN, the LR, and the number of children at each level, in other words, the number of hidden nodes.

In [40], the author presented a method based on Multilayer Perceptron and Long Short-Term Memory networks to predict stock market indices. The historical data of the Bombay Stock Exchange (BSE) Sensex from the Indian stock market were used to evaluate the method. In [40], the selected NN architecture was verified manually, not automatically, during a trial-and-error process. The author also used only one hidden layer and changed only the number of hidden units per hidden layer, without considering the activation function and the learning rate. However, the proposed technique selects a suitable structure from all possible structures, which increases the probability of selecting the optimal structure, and uses multiple hidden layers and multiple nodes per hidden layer. The proposed technique changes NHL and NNPHL to adjust the selection of the optimal structure and considers the effect of ACTFUN and the LR.

In another work, the authors of [41] suggested an enhanced PSO to train Sigmoid Diagonal Recurrent Neural Networks (SDRNNs) and to optimize the parameters of the SDRNN. The historical data of the NASDAQ-100 and S&P 500 stock market indices were used to evaluate their model. The authors in [41] did not discuss the selection of the best structure of the network in terms of NHL, NNPHL, ACTFUN, and LR.

The authors in [42] presented a new stock market prediction method that applies the firefly algorithm within an evolutionary framework. They applied their method to the Online Sequential Extreme Learning Machine (OSELM). Their model's performance was evaluated using the datasets of the BSE Sensex, NSE Sensex, S&P 500, and FTSE indices. The authors of [42] did not optimize the hyperparameters of the network.

In [43], the authors suggested a new end-to-end hybrid neural network approach to forecast the price direction of a stock market index. This approach depends on learning multiple time scale features extracted from the daily price and on a CNN that performs the prediction using fully connected layers, which bring together features learned by the Long Short-Term Memory network. The historical data of the S&P 500 index were used to evaluate this approach. The authors of [43] did not study the effect of optimizing the hyperparameters on the network's performance.

The authors of [44] proposed a hybrid model to forecast the Nikkei 225 index price for the Tokyo stock market. The hybrid model used an ANN and genetic algorithms to optimize the accuracy of stock price prediction. The ANN was trained using backpropagation as the learning algorithm. In [44], the authors used an ANN with only one hidden layer and did not consider the effect of the hyperparameters of the ANN, such as NHL, NNPHL, ACTFUN, and LR. In [45], the authors proposed an adaptive system to predict the price in the stock market. This system uses the PSO algorithm to overcome the problems of the backpropagation approach and to train an ANN to forecast the price of the S&P 500 Index and the NASDAQ Composite Index, therefore helping investors to make correct trading decisions. They also used an ANN with only one hidden layer and a fixed NNPHL equal to 2N-1, where N is the number of inputs. They did not consider the effect of other hyperparameters such as ACTFUN and LR.

In [46], the authors studied the different modifications of PSO and its application to the stock market to predict prices. In another work [47], the PSO algorithm selects the optimally weighted set of trading signals that specify a buy, sell, or hold order, depending on evolutional learning from the New York Stock Exchange (NYSE) and the Stock Exchange of Thailand (SET).

In [48], the authors proposed an effective prediction model using PSO to forecast the S&P 500 and DJIA stock indices for the short and long term. This model applies an adaptive linear combiner (ALC), whereas PSO modifies its weights. The authors compared their model with an MLP-based model. The results show that their model is better than the MLP-based model in terms of accuracy and training time.

4. The Proposed Technique

Determining the neural network parameters to create an appropriate architecture that produces output with an accepted error is very time- and cost-consuming. To determine the architecture of the network, the configuration is generally carried out by hand in a trial-and-error fashion. However, the manual tuning of the hyperparameters of an ANN through the trial-and-error process and finding accurate configurations consume a long time. A different approach uses global optimization techniques, for instance, applying the PSO algorithm to choose the best architectures with minimum error. As can be seen from the discussion of the related literature, most of the proposed methods suffer from some limitations, such as using a certain fixed NHL, a fixed NNPHL, or a fixed number of iterations. In most of these methods, the proposed approaches study the effect of only one of these hyperparameters at a time. Some of the suggested algorithms are time-consuming and have a high computational cost. Particle swarm optimization has been employed very usefully in finding optimal parameters [49]. For that reason, in our earlier work [50], the effect of NHL and NNPHL on the performance of ANN using PSO was studied and discussed.

The purpose of the proposed model is to provide a potential solution for the problem of designing the best structure for a multilayer ANN (MLANN) via selecting the best configuration for the network, for example, choosing the best hyperparameters, including NHL, NNPHL, ACTFUN, and the learning rate, maximizing prediction accuracy and minimizing processing time.

Now we modify the standard PSO by adding the "center of gravity (CoG)" concept, which can be characterized as a mean of gravities calculated by their distances from a referenced center. A new algorithm called PSO with center of gravity (PSOCoG) is proposed in this paper, as described in the following paragraphs. Assume that there are $N_g$ single gravitational points $1, \ldots, N_g$. The movement of these gravitational individuals is identified by their location vectors $\vec{s}_i(t)$, where $i = 1, \ldots, N_g$. CoG is a position in a gravities system where the location vector $\vec{S}$ is determined utilizing the gravities $g_i$ and location vectors $\vec{s}_i$ in this way:

$$\vec{S} = \frac{1}{G} \sum_{i=1}^{N_g} g_i \times \vec{s}_i, \quad (4)$$

$$G = \sum_{i=1}^{N_g} g_i, \quad (5)$$

where $G$ is the sum of gravities and $N_g$ is the number of gravities [51].

In this technique, an efficient CoG-particle ($X_{cg}$) is suggested. $X_{cg}$ participates in speeding up the convergence of the model in fewer repetitions. It helps in finding a solution closer to the optimal one and enhances the quality of the solution. The CoG-particle is a virtual member of the swarm used to represent the swarm at each repetition. In addition, the CoG-particle is weighted via the values of a fitness function for the swarm members. It has no speed and does not carry out any of the standard swarm member's tasks, such as fitness evaluation or searching for the optimal solution. The weighted center of the swarm can be determined as follows, by considering the swarm individuals as a group of gravity points and making the value of the target function of each individual match the gravity:

$$X_{cg}(t) = \frac{1}{F_{cg}} \sum_{i=1}^{N_g} F(x_i) \times x_i(t), \quad (6)$$

where $F(x_i)$ is the fitness value of member $i$ at location $x_i(t)$ and $F_{cg}$ is the sum of the values of fitness for all members. Using formula (7), $F_{cg}$ can be determined similarly to the sum of gravities $G$ in formula (5). The fitness value $F(x_i)$ for maximum optimization is given in formula (8), while formula (9) is used for minimum optimization:

$$F_{cg} = \sum_{i=1}^{N_g} F(x_i), \quad (7)$$

$$F(x_i) = f(x_i), \quad (8)$$

$$F(x_i) = \frac{1}{f(x_i) + \varepsilon}, \quad \varepsilon \approx 0, \quad (9)$$

where $f(x_i)$ is the objective function, whose range is supposed to be positive in this situation.
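For minimization, equations (6), (7), and (9) reduce to a fitness-weighted average of the member positions. A minimal NumPy sketch (the function name `cog_particle` is ours, not from the paper):

```python
import numpy as np

def cog_particle(positions, objective_values, eps=1e-12):
    """Compute the CoG-particle X_cg for a minimization problem.

    Each member's gravity is F(x_i) = 1 / (f(x_i) + eps), equation (9);
    F_cg is their sum, equation (7); X_cg is the gravity-weighted mean
    of the member positions, equation (6).
    """
    positions = np.asarray(positions, dtype=float)   # shape (Ng, dim)
    fitness = 1.0 / (np.asarray(objective_values, dtype=float) + eps)
    f_cg = fitness.sum()
    return (fitness[:, None] * positions).sum(axis=0) / f_cg
```

Members with a smaller objective value receive a larger gravity, so $X_{cg}$ is pulled towards the best-performing region of the swarm.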

The updated speed formula for each member can be created via equation (10) by computing $X_{cg}$, the best global solution at the whole swarm level $P_{gs}$, and the best local solution detected by every individual $P_{il}$, as follows:

$$v_{id}^{(t+1)} = w v_{id}^{(t)} + c_1 r_1 \left( \frac{P_{ild}^{(t)} + X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right) + c_2 r_2 \left( \frac{P_{gsd}^{(t)} - X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right), \quad i = 1, \ldots, N_g. \quad (10)$$

Then the modified location for any individual in the swarm is calculated using

$$X_{id}^{(t+1)} = v_{id}^{(t+1)} + X_{id}^{(t)}, \quad i = 1, \ldots, N_g, \quad (11)$$

where the particle moves in a multidimensional space. The searching cycle proceeds till the termination condition is true. The idea is that the CoG-particle ($X_{cg}$) drives the convergence of swarm individuals towards the optimal global solution and helps them move away from local solutions. This may be clarified by explaining the function of the second and third parts in (10). The term $((P_{ild}^{(t)} + X_{cg}^{(t)})/2 - X_{id}^{(t)})$ governs the attraction of the present location of the individual towards the average of the positive direction of its best local solution ($+P_{ild}$) and the positive direction of the location of the CoG-particle ($+X_{cg}$), which helps the personal experience term move away from local solutions, while the term $((P_{gsd}^{(t)} - X_{cg}^{(t)})/2 - X_{id}^{(t)})$ governs the attraction of the present location of the swarm member towards the average of the positive direction of the best global solution ($+P_{gs}$) and the negative direction of the location of the CoG-particle ($-X_{cg}$), which helps keep the population diversity efficient (exploration and exploitation) during the searching operation. This increases the probability of rapidly approaching global solutions (or near-global solutions), where the CoG-particle pulls swarm members towards the area of the best-discovered promising solutions. Consequently, that offers individuals the optimal opportunity to take the location of the globally best-discovered candidate solution through the discovery phase. All earlier motions are supported by a linearly decreasing inertia weight, which allows adjusting the population diversity during the searching operation.

The proposed prediction model uses the new PSOCoG technique to train the ANN, and the resulting model is called the ANN-PSOCoG model. In the ANN-PSOCoG model, a 4-dimensional search space is used. The first dimension is the number of hidden layers (NHL), the second is the number of nodes per hidden layer (NNPHL), the third is the type of activation function (ACTFUN), and the fourth is the learning rate (LR). Any particle inside the discovery area is considered a candidate solution; this means that the location of the particle determines the values of NHL, NNPHL, ACTFUN, and LR, which represent a possible configuration for the network. In the search phase, the PSO technique is used to find the best settings by flying the particles within a bounded search space. Each particle has its attributes, which are location, speed, and a fitness value calculated by a fitness function. The particle's speed defines the next movement (direction and traveled distance). The fitness value represents an index of the convergence of the particle to the solution. The position of each particle is updated to approach the individual that has the optimal location, according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution that the particle can reach personally; the second is the global optimal solution, which is the solution that the swarm can obtain cooperatively so far. The suggested technique uses several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

$$f_i = \frac{1}{m} \sum_{j=1}^{m} (\text{expected output}_j - \text{actual output}_j)^2, \quad (12)$$

where m is the number of inputs of the ANN. The fitness values of all swarm members are determined using (12). For each particle, the location is stored as Xi and the fitness value is stored as Pil. Among all Pil, the Pil with the minimum fitness value is selected as the global best particle (Pgs); then the location and fitness value of Pgs are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process is iterated till the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters that are used to build the ANN. Algorithm 1 explains the suggested model.
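The full search loop of Algorithm 1 can be sketched as follows. Here `fitness_fn` stands in for building and evaluating the ANN encoded by a particle (equation (12)); the implementation details (function names, clipping to the box bounds) are our assumptions, not the authors' code:

```python
import numpy as np

def psocog_search(fitness_fn, bounds, pop=20, maxite=100,
                  w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """PSOCoG minimization over a box-bounded search space.

    bounds: sequence of (low, high) pairs, one per dimension.
    Returns the global best position and its fitness value.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (pop, lo.size))
    v = np.zeros_like(x)
    f = np.array([fitness_fn(p) for p in x])
    p_local, f_local = x.copy(), f.copy()
    g = x[f.argmin()].copy()
    for t in range(maxite):
        w = w_max - (w_max - w_min) * t / maxite              # decreasing inertia
        grav = 1.0 / (f + 1e-12)                              # gravities, eq. (9)
        x_cg = (grav[:, None] * x).sum(axis=0) / grav.sum()   # CoG-particle, eq. (6)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = (w * v + c1 * r1 * ((p_local + x_cg) / 2.0 - x)
                   + c2 * r2 * ((g - x_cg) / 2.0 - x))        # equation (10)
        x = np.clip(x + v, lo, hi)                            # equation (11)
        f = np.array([fitness_fn(p) for p in x])
        improved = f < f_local
        p_local[improved], f_local[improved] = x[improved], f[improved]
        g = p_local[f_local.argmin()].copy()
    return g, f_local.min()
```

With a real ANN, `fitness_fn` would decode the particle into hyperparameters, train the network, and return its validation error.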

5. Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of Standard and Poor's 500 (S&P 500) [52] and Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The 500 stocks that highlight the performance of large-cap entities, selected by leading economists and weighted by market values, are referred to as the Standard and Poor's 500 Index. The S&P 500 Index, or the Standard & Poor's 500 Index, is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3776 trading days, from August 16, 2005, to August 16, 2020. We also used historical data of the DJIA dataset, where the overall number of observations for the exchange indices was 9036 trading days, from January 28, 1985, to December 02, 2020. Each observation includes the opening price, the highest price, the lowest price, the

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (fi) for each particle
(7)     if (the fitness value is less than Pil)
(8)       Set the current fitness value as the new Pil
(9)     end-if
(10)    Select the minimum fi of all particles in the swarm as Pgs
(11)    for d = 1 to D
(12)      Calculate V(t+1)id from equation (10)
(13)      Calculate X(t+1)id from equation (11)
(14)    end-for
(15)  end-for
(16)  Return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.


closing price, and the total volume of the stocks traded per day. The opening price, the highest price, the lowest price, and the total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, the data are divided as follows: 80% of the available data were used as a training set, 10% as a validation set, and the last 10% as a test set.
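The 80%/10%/10% split described above can be sketched as simple chronological slicing (no shuffling, so the time order of trading days is preserved; the function name is ours):

```python
import numpy as np

def chronological_split(features, targets, train=0.8, val=0.1):
    """Split observations as in the paper: the first 80% for training,
    the next 10% for validation, and the last 10% for testing."""
    n = len(features)
    i, j = int(n * train), int(n * (train + val))
    return ((features[:i], targets[:i]),
            (features[i:j], targets[i:j]),
            (features[j:], targets[j:]))
```

For the S&P 500 data, `features` would hold the open, high, low, and volume columns and `targets` the closing price.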

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) = 20 and the maximum number of iterations (maxite) = 100. Besides, the following settings were used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (HL, NPHL, ACTFUN, and LR) is used to represent the hyperparameters of the ANN.
(vi) Initialization ranges for HL were set to [1, 7] according to [44, 58].
(vii) NPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg–Marquardt algorithm [62] was used as the training function for the network.
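Before the ANN for a particle can be built, its continuous position must be decoded into a concrete configuration. A plausible decoding under the ranges above (the paper does not spell out its encoding, so the rounding and clamping scheme here is our assumption):

```python
ACTFUNS = ['logsig', 'softmax', 'tansig', 'purelin']

def decode_particle(position):
    """Map a 4-dimensional particle position onto the hyperparameter
    ranges listed above: NHL in [1, 7], NPHL in [14, 21], ACTFUN one of
    four choices, LR in [0.01, 0.9]. Rounding/clamping is an assumption."""
    nhl, nphl, act, lr = position
    return {
        'NHL': min(7, max(1, round(nhl))),
        'NPHL': min(21, max(14, round(nphl))),
        'ACTFUN': ACTFUNS[min(3, max(0, int(act)))],
        'LR': min(0.9, max(0.01, lr)),
    }
```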

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \quad (13)$$

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \quad (14)$$

$$\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \quad (15)$$

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (\text{actual output}_i - \text{expected output}_i)^2. \quad (16)$$
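Equations (13) to (16) translate directly into NumPy; equation (15) is the per-observation version of (13) without the percentage:

```python
import numpy as np

def mape(actual, expected):
    """Equation (13): mean absolute percentage error, in percent."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return float(np.mean(np.abs((a - e) / a)) * 100)

def mae(actual, expected):
    """Equation (14): mean absolute error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return float(np.mean(np.abs(a - e)))

def relative_error(actual, expected):
    """Equation (15): per-observation relative error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return (a - e) / a

def mse(actual, expected):
    """Equation (16): mean-square error."""
    a, e = np.asarray(actual, float), np.asarray(expected, float)
    return float(np.mean((a - e) ** 2))
```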

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG is also compared with other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only artificial neural networks were used to predict the closing price. The historical data of Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg–Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using the historical data of S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization (SPSO) with the CoG concept was used to predict the closing price using the historical data of S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and standard particle swarm optimization (ANN-SPSO) was

8 Scientific Programming

used to predict the closing price, where SPSO was used to train the network and select the best configurations. The dataset of historical data of S&P 500 was used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of the HL, NPHL, ACTFUN, and LR. The dataset of historical data of S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7,

Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: the NHL was equal to 6, the NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.
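This hyperparameter search can be pictured as each particle carrying a position vector that decodes into a candidate network configuration. The sketch below is an illustrative assumption only: the dimension layout, the value ranges, and the `decode_particle` helper are hypothetical, not the paper's exact encoding.

```python
# Hypothetical decoding of a particle position into ANN hyperparameters.
# Ranges and layout are assumptions for illustration, not the paper's encoding.
ACTIVATIONS = ["logsig", "tansig", "purelin"]  # candidate activation functions

def decode_particle(pos):
    """pos = [hl, nphl, act, lr], each component a real number in [0, 1)."""
    hl = 1 + int(pos[0] * 10)            # number of hidden layers: 1..10
    nphl = 1 + int(pos[1] * 30)          # neurons per hidden layer: 1..30
    act = ACTIVATIONS[int(pos[2] * len(ACTIVATIONS))]
    lr = 0.001 + pos[3] * (1.0 - 0.001)  # learning rate in [0.001, 1.0)
    return {"NHL": hl, "NNPHL": nphl, "ACTFUN": act, "LR": round(lr, 4)}

# A position that roughly reproduces the configuration reported for particle 7
print(decode_particle([0.5, 0.68, 0.9, 0.5465]))
# {'NHL': 6, 'NNPHL': 21, 'ACTFUN': 'purelin', 'LR': 0.547}
```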

Although the process of determining the best hyperparameters of the ANN normally consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of regression (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of the regression coefficient (R) was 0.99995 for training, 0.99992 for validation, and 0.99994 for testing, while for all data it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the forecasted output of the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with a very high accuracy, close to the actual closing price. Based on the above discussion, it is noted that the proposed model is able to give investors the confidence to make the correct decisions to buy or sell shares, achieving the largest possible profit and avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, since MAPE captures the fluctuation of ECP around ACP. So, if the fluctuation of the price increases, the error of prediction increases, and the risk in predicting the closing price for the stock market increases.

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using the historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between all models' expected closing prices and the actual closing price using the historical data of S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, while it outperformed SPSOCoG by about 17%; it also outperformed SPSO by about 20%, and ANN-PSOCoG outperformed ANN by approximately 25%.

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, while it outperformed SPSOCoG by about 24%; it also outperformed SPSO by about 33%, and ANN-PSOCoG outperformed ANN by approximately 42%. Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, together with MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG's performance is the best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best among the compared models, and ANN-PSOCoG is also more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric through the training phase. Figure 10(a) shows MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the lowest MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest compared to the other models.

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.


5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. One example is the drop of the S&P 500 index in the USA during the COVID-19 crisis, where it lost one-third of its value in only one month [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dot-com crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020 it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year during the subprime crisis and one year and a half during the dot-com bust.

Till September 2, 2020, there had been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training: R = 0.99995, fit Output ≈ 1·Target + 0.21. (b) Validation: R = 0.99992, fit Output ≈ 1·Target + 0.74. (c) Test: R = 0.99994, fit Output ≈ 1·Target + 0.77. (d) All: R = 0.99994, fit Output ≈ 1·Target + 0.15.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019, to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].
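Such historical series can be obtained as CSV exports from Yahoo Finance; a minimal loader might look like the following. The column names `Date` and `Close` match Yahoo's export format, and the sample prices are illustrative values only:

```python
import csv
import io

def load_closing_prices(csv_text):
    # Parse a Yahoo-Finance-style CSV export into parallel date/close lists.
    rows = csv.DictReader(io.StringIO(csv_text))
    dates, closes = [], []
    for row in rows:
        dates.append(row["Date"])
        closes.append(float(row["Close"]))
    return dates, closes

sample = "Date,Close\n2019-12-31,3230.78\n2020-01-02,3257.85\n"
dates, closes = load_closing_prices(sample)
print(dates[0], closes[-1])  # 2019-12-31 3257.85
```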

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG to the historical data of S&P 500, while Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG over the historical data of S&P 500.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over the historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over the historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using the historical data of NASDAQ-100 are shown in Figure 14(a), while RE of ANN-PSOCoG using the data of NASDAQ-100 is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and the other models (ACP versus the expected closing prices of all models).

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO         ANN
MAPE                0.00024      0.0027      0.0038     0.0053       0.0070
MAE                 0.00017      0.047       0.067      0.070        0.087
Elapsed time (sec)  510.82822    891.28091   952.1345   1123.75813   1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG   ANN-SPSO   SPSOCoG    SPSO       ANN
MAPE                0.00072      0.030      0.042      0.062      0.071
MAE                 0.00055      0.0024     0.0033     0.0049     0.0056
Elapsed time (sec)  834.259      973.215    1056.284   1389.368   1537.569


Figure 11: Effect of COVID-19 on the stock market in the USA: S&P 500 index (scaled to 100 at the peak) versus days from the peak, for the dot-com, subprime, and COVID-19 crises.

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test trained the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the American dollar. The actual rate and the expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12–15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification gives a new efficient search technique. It benefits from the physical principle of the "center of gravity" to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN to select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using the historical data of two different datasets, namely, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Also, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that, using the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model, where the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.
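The "center of gravity" idea summarized above — a virtual particle placed at the fitness-weighted mean of the swarm — can be sketched as follows. The inverse-fitness weighting for minimization (better particles pull the center harder) is an assumption of this sketch, not necessarily the paper's exact weighting scheme:

```python
def cog_particle(positions, fitnesses, eps=1e-12):
    # Fitness-weighted swarm center for a minimization problem:
    # smaller fitness -> larger weight -> stronger pull on the center.
    weights = [1.0 / (f + eps) for f in fitnesses]
    total = sum(weights)
    dim = len(positions[0])
    return [sum(w * p[d] for w, p in zip(weights, positions)) / total
            for d in range(dim)]

swarm = [[0.0, 0.0], [2.0, 2.0]]
fit = [1.0, 1.0]                  # equal fitness -> plain mean of positions
print(cog_particle(swarm, fit))   # [1.0, 1.0]
```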

The proposed model might need more training time, and it requires modern computational resources when it is used with a very huge dataset, an oil reservoir dataset for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be considered to improve the proposed model, as follows:

(i) We will study the effect of changing NNPHL for each layer, instead of it being the same for all hidden layers in the proposed model.

(ii) We will extend the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) We will discuss the effect of the activation function: whether the same activation function is applied to all the layers or a different activation function is applied to each layer.

(iv) The impact of using more than one swarm for PSOCoG can be studied.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We will discuss more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics   S&P 500    Gold       NASDAQ-100   CANUSD
MAPE      0.00158    0.007799   0.00239      0.001719891
MAE       0.030529   0.079009   0.224702     1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of covid-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyper-parameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, UK, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets: https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervelló-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervelló-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.


[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard's and Poor's 500 (S&P 500): https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices: https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/ri/manual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, CA, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report: https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard: https://covid19.who.int.

[67] Historical data for stock market indices: https://finance.yahoo.com/lookup.

Scientific Programming 17


ACTFUN, and the learning rate, maximizing prediction accuracy and minimizing processing time.

Now we modify the standard PSO by adding the "center of gravity (CoG)" concept, which can be characterized as a mean of gravities calculated by their distances from a referenced center. A new algorithm called PSO with center of gravity (PSOCoG) is proposed in this paper, as described in the following paragraphs. Assume that there are $N_g$ single gravitational points $1, \ldots, N_g$. The movement of these gravitational individuals is identified by appointing their location vectors $\vec{s}_1, \ldots, \vec{s}_{N_g}$ as $\vec{s}_i(t)$, where $i = 1, \ldots, N_g$. The CoG is the position in a system of gravities whose location vector $\vec{S}$ is determined from the gravities $g_i$ and the location vectors $\vec{s}_i$ in this way:

$$\vec{S} = \frac{1}{G} \sum_{i=1}^{N_g} g_i \, \vec{s}_i, \qquad (4)$$

$$G = \sum_{i=1}^{N_g} g_i, \qquad (5)$$

where $G$ is the sum of the gravities and $N_g$ is the number of gravities [51].
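As an illustration, equations (4) and (5) can be sketched in a few lines of Python; the function name and data layout are our own choices, not part of the paper:

```python
# Illustrative sketch of equations (4)-(5): the center of gravity S of N_g
# gravitational points, each with gravity g_i and location vector s_i.

def center_of_gravity(gravities, locations):
    """Return S = (1/G) * sum_i g_i * s_i, with G = sum_i g_i."""
    G = sum(gravities)                # equation (5): total gravity
    dim = len(locations[0])
    S = [0.0] * dim
    for g, s in zip(gravities, locations):
        for d in range(dim):
            S[d] += g * s[d] / G      # equation (4): gravity-weighted mean
    return S
```

For two equal gravities at (0, 0) and (2, 2), the CoG is the midpoint (1, 1); unequal gravities pull the CoG toward the heavier point.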

In this technique, an efficient CoG-particle ($X_{cg}$) is suggested. $X_{cg}$ participates in speeding up the convergence of the model in fewer repetitions; it helps in finding a solution closer to the optimum and enhances the quality of the solution. The CoG-particle is a virtual member of the swarm used to represent the swarm at each repetition. In addition, the CoG-particle is weighted via the values of a fitness function for the swarm members. It has no speed and does not carry out any of the standard swarm member's tasks, such as fitness evaluation or searching for the optimal solution. The weighted center of the swarm can be determined as follows, by considering the swarm individuals as a group of gravity points and making the value of the target function of each individual play the role of the gravity:

$$X_{cg}(t) = \frac{1}{F_{cg}} \sum_{i=1}^{N_g} F(x_i) \, x_i(t), \qquad (6)$$

where $F(x_i)$ is the fitness value of member $i$ at location $x_i(t)$, and $F_{cg}$ is the sum of the fitness values of all members. Using formula (7), $F_{cg}$ is determined similarly to the sum of gravities $G$ in formula (5). The fitness value $F(x_i)$ for maximum optimization is given in formula (8), while formula (9) is used for minimum optimization:

$$F_{cg} = \sum_{i=1}^{N_g} F(x_i), \qquad (7)$$

$$F(x_i) = f(x_i), \qquad (8)$$

$$F(x_i) = \frac{1}{f(x_i) + \varepsilon}, \quad \varepsilon \neq 0, \qquad (9)$$

where $f(x_i)$ is the objective function, whose range is supposed to be positive in this situation.
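A minimal sketch of the CoG-particle of equations (6)-(9), assuming the minimization transform (9); the helper name and the epsilon value are illustrative:

```python
# Sketch of the fitness-weighted CoG-particle X_cg (equations (6)-(9)).
# For minimization, equation (9) maps small objective values to large weights,
# so the CoG-particle is pulled toward the better-performing members.

def cog_particle(objective_values, positions, minimize=True, eps=1e-9):
    F = [1.0 / (f + eps) if minimize else f for f in objective_values]  # (8)/(9)
    Fcg = sum(F)                                   # equation (7)
    dim = len(positions[0])
    Xcg = [0.0] * dim
    for Fi, xi in zip(F, positions):
        for d in range(dim):
            Xcg[d] += Fi * xi[d] / Fcg             # equation (6)
    return Xcg
```

With objective values 1 and 9 at positions 0 and 10, the CoG-particle sits near 1, strongly attracted to the position with the smaller objective.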

The updated speed formula for each member is created via equation (10) by computing $X_{cg}$, the best global solution at the whole-swarm level $P_{gs}$, and the best local solution detected by every individual $P_{il}$, as follows:

$$v_{id}^{(t+1)} = w v_{id}^{(t)} + c_1 r_1 \left( \frac{P_{ild}^{(t)} + X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right) + c_2 r_2 \left( \frac{P_{gsd}^{(t)} - X_{cg}^{(t)}}{2} - X_{id}^{(t)} \right), \quad i = 1, \ldots, N_g. \qquad (10)$$

Then the modified location for any individual in the swarm is calculated using

$$X_{id}^{(t+1)} = v_{id}^{(t+1)} + X_{id}^{(t)}, \quad i = 1, \ldots, N_g, \qquad (11)$$

where the particle moves in a multidimensional space. The searching cycle proceeds till the termination condition is true. The idea behind the CoG-particle ($X_{cg}$) is to make the swarm individuals converge towards the optimal global solution and help them move away from local solutions. This may be clarified by explaining the function of the second and third parts in (10). The term $\bigl((P_{ild}^{(t)} + X_{cg}^{(t)})/2 - X_{id}^{(t)}\bigr)$ guides the attraction of the individual's present location toward the average of the positive direction of its best local solution ($+P_{ild}$) and the positive direction of the location of the CoG-particle ($+X_{cg}$), which assists the personal-experience term in moving away from local solutions. The term $\bigl((P_{gsd}^{(t)} - X_{cg}^{(t)})/2 - X_{id}^{(t)}\bigr)$ guides the attraction of the present location of the swarm member toward the average of the positive direction of the best global solution ($+P_{gs}$) and the negative direction of the location of the CoG-particle ($-X_{cg}$), which assists in keeping the population diversity efficient (the exploration and exploitation process) during the searching operation. This increases the probability of rapidly approaching global solutions (or near-global solutions), since the CoG-particle pulls swarm members toward the area of the best-discovered promising solutions. Consequently, this offers individuals the optimal opportunity to take the location of the global best-discovered candidate solution during the discovery phase. All earlier motions are supported by a linearly decreasing inertia weight, which allows the population diversity to be adjusted during the searching operation.

The proposed prediction model uses the new PSOCoG technique to train the ANN, and the resulting model is called the ANN-PSOCoG model. In the ANN-PSOCoG model, a 4-dimensional search space is used. The first dimension is the number of hidden layers (HL), the second is the number of neurons per hidden layer (NPHL), the third is the type of activation function (ACTFUN), and the fourth is the learning rate (LR). Any particle inside the discovery area is considered a candidate solution. This means that the location of the particle determines the values of NHL, NNPHL, ACTFUN, and LR, which represent a possible configuration for the network. In the search phase, the PSO technique is used to find the best settings by flying the particles within a bounded search space. Each particle has its attributes, which are location, speed, and a fitness value calculated by a fitness function. The particle's speed defines the next movement (direction and traveled distance). The fitness value represents an index for the convergence of the particle to the solution. The position of each particle is updated to approach the individual that has an optimal location according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution that the particle can reach personally; the second is the global optimal solution, which is the solution that the swarm can obtain cooperatively till now. The suggested technique uses several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

$$f_i = \frac{1}{m} \sum_{j=1}^{m} (\text{expected output}_j - \text{actual output}_j)^2, \qquad (12)$$

where $m$ is the number of inputs for the ANN.

The fitness values of all swarm members are determined using (12). For each particle, the location is stored as $X_i$ and the fitness value as $P_{il}$. Among all $P_{il}$, the one with the minimum fitness value is selected as the global best particle ($P_{gs}$); then the location and fitness value of $P_{gs}$ are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process iterates till the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters, which are used to build the ANN. Algorithm 1 explains the suggested model.
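The search procedure described above can be sketched end-to-end on a toy objective. In this minimal sketch, a sphere function stands in for training and evaluating a real network with equation (12); the swarm size, iteration count, and inertia schedule mirror the settings reported later in the paper, and everything else is an illustrative assumption:

```python
import random

# Minimal end-to-end sketch of the PSOCoG search (Algorithm 1). A toy sphere
# objective replaces the ANN evaluation of equation (12); P holds the personal
# bests P_il, index g marks the global best P_gs, and x_cg is the CoG-particle.

def pso_cog(objective, dim, pop=20, maxite=100, wmax=0.9, wmin=0.4, c1=2.0, c2=2.0):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                         # personal best positions (P_il)
    Pf = [objective(x) for x in X]                # personal best fitness values
    g = min(range(pop), key=lambda i: Pf[i])      # index of global best (P_gs)
    for t in range(maxite):
        w = wmax - (wmax - wmin) * t / maxite     # linearly decreasing inertia
        F = [1.0 / (f + 1e-9) for f in Pf]        # equation (9), minimization
        Fcg = sum(F)
        x_cg = [sum(F[i] * P[i][d] for i in range(pop)) / Fcg
                for d in range(dim)]              # equations (6)-(7)
        for i in range(pop):
            r1, r2 = random.random(), random.random()
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * ((P[i][d] + x_cg[d]) / 2 - X[i][d])
                           + c2 * r2 * ((P[g][d] - x_cg[d]) / 2 - X[i][d]))  # (10)
                X[i][d] += V[i][d]                # equation (11)
            f = objective(X[i])
            if f < Pf[i]:                         # update personal best
                Pf[i], P[i] = f, X[i][:]
        g = min(range(pop), key=lambda i: Pf[i])  # update global best
    return P[g], Pf[g]

random.seed(42)
best, best_f = pso_cog(lambda x: sum(v * v for v in x), dim=4)
```

In the actual model, each particle would be decoded into (HL, NPHL, ACTFUN, LR), the corresponding network trained, and equation (12) used as the objective.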

5. Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of Standard & Poor's 500 (S&P 500) [52] and the Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The 500 stocks that highlight the performance of large-cap entities, selected by leading economists and weighted by market values, are referred to as the Standard & Poor's 500 Index. The S&P 500 Index is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and is regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3776 trading days, from August 16, 2005 to August 16, 2020. We also used historical data of the DJIA dataset, where the overall number of observations was 9036 trading days, from Jan 28, 1985 to Dec 02, 2020. Each observation includes the opening price, highest price, lowest price, the

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (f_i) for each particle
(7)     if (the fitness value is less than P_il)
(8)       Set the current fitness value as the new P_il
(9)     end-if
(10)    Select the minimum f_i of all particles in the swarm as P_gs
(11)    for d = 1 to D
(12)      Calculate V_id^(t+1) from Equation (10)
(13)      Calculate X_id^(t+1) from Equation (11)
(14)    end-for
(15)  end-for
(16)  return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.


closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and the total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, 80% of the available data were used as a training set, 10% as a validation set, and the last 10% as a test set.
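The 80/10/10 chronological split described above can be sketched as follows; the row layout (open, high, low, volume, close) mirrors the columns named in the text:

```python
# Sketch of the data split used in all scenarios: 80% training, 10% validation,
# and 10% test, taken in chronological order (oldest observations first).

def split_series(rows, train=0.8, val=0.1):
    """rows: list of (open, high, low, volume, close) tuples, oldest first."""
    n = len(rows)
    n_train, n_val = int(n * train), int(n * val)
    return (rows[:n_train],                  # training set
            rows[n_train:n_train + n_val],   # validation set
            rows[n_train + n_val:])          # test set
```

Splitting chronologically rather than randomly avoids leaking future prices into the training set.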

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as: swarm size (pop) = 20 and maximum number of iterations (maxite) = 100. Besides, the following settings are used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (HL, NPHL, ACTFUN, and LR) is used to represent the hyperparameters of the ANN.
(vi) Initialization ranges for HL were set to [1, 7] according to [44, 58].
(vii) NPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg-Marquardt algorithm [62] was used as the training function for the network.
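One plausible way to decode a particle's 4-dimensional position into a concrete network configuration, using the ranges above, is sketched below. The decoding scheme itself is our assumption for illustration, since the paper does not spell it out:

```python
# Hypothetical decoding of a particle position (4 components in [0, 1)) into
# the hyperparameters HL, NPHL, ACTFUN, and LR, using the ranges of Section 5.2.

ACTFUNS = ["logsig", "softmax", "tansig", "purelin"]

def decode_particle(position):
    p0, p1, p2, p3 = position
    hl = 1 + min(int(p0 * 7), 6)            # hidden layers in [1, 7]
    nphl = 14 + min(int(p1 * 8), 7)         # neurons per hidden layer in [14, 21]
    actfun = ACTFUNS[min(int(p2 * 4), 3)]   # one of the four activation functions
    lr = 0.01 + p3 * (0.9 - 0.01)           # learning rate in [0.01, 0.9]
    return {"HL": hl, "NPHL": nphl, "ACTFUN": actfun, "LR": lr}
```

Mapping continuous particle coordinates to discrete choices this way lets a standard real-valued PSO search over mixed discrete and continuous hyperparameters.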

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \qquad (13)$$

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \qquad (14)$$

$$\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \qquad (15)$$

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \text{actual output}_i - \text{expected output}_i \right)^2. \qquad (16)$$
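For reference, the four metrics (13)-(16) can be sketched directly; the function names are our own:

```python
# Sketch of the error metrics in equations (13)-(16).

def mape(actual, expected):
    """Equation (13): mean absolute percentage error."""
    return 100.0 / len(actual) * sum(abs((a - e) / a)
                                     for a, e in zip(actual, expected))

def mae(actual, expected):
    """Equation (14): mean absolute error."""
    return sum(abs(a - e) for a, e in zip(actual, expected)) / len(actual)

def relative_error(actual_i, expected_i):
    """Equation (15): per-observation relative error."""
    return (actual_i - expected_i) / actual_i

def mse(actual, expected):
    """Equation (16): mean-square error."""
    return sum((a - e) ** 2 for a, e in zip(actual, expected)) / len(actual)
```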

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG is also compared with the other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only the artificial neural network was used to predict the closing price. The historical data of Standard & Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg-Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network. To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using historical data of S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization with the CoG concept (SPSOCoG) was used to predict the closing price using historical data of S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and standard particle swarm optimization (ANN-SPSO) was used to predict the closing price, where SPSO was used to train the network and select the best configuration. The dataset of historical data of S&P 500 was used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of the HL, NPHL, ACTFUN, and LR. The dataset of historical data of S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7

Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were: NHL equal to 6, NNPHL equal to 21, the selected activation function purelin, and the learning rate equal to 0.5469.

Although the process of determining the best hyperparameters of an ANN usually consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of the regression (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of the regression coefficient (R) was 0.99995 for training, 0.99992 for validation, and 0.99994 for testing, while for all it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the output forecasted by the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with very high accuracy, close to the actual closing price. Based on the above discussion, the proposed model is able to give investors the confidence to make correct decisions to buy or sell shares, achieving the largest possible profit while avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, reflecting the fluctuation of ECP around ACP; if the price fluctuation increases, the prediction error increases, and the risk of predicting the closing price for the stock market increases.
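The regression coefficient R reported in Figure 6 is the linear (Pearson) correlation between network outputs and targets; a self-contained sketch of computing it (our own helper, not the paper's code):

```python
# Sketch of the regression coefficient R: the Pearson correlation between
# network outputs and targets. R close to 1 means outputs track targets.

def regression_r(targets, outputs):
    n = len(targets)
    mt, mo = sum(targets) / n, sum(outputs) / n
    cov = sum((t - mt) * (o - mo) for t, o in zip(targets, outputs))
    st = sum((t - mt) ** 2 for t in targets) ** 0.5
    so = sum((o - mo) ** 2 for o in outputs) ** 0.5
    return cov / (st * so)
```

Any output that is an exact positive linear function of the target gives R = 1, which is why values near 0.9999 indicate an almost perfect fit.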

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between all models' expected closing prices and the actual closing price using the historical data of S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, SPSOCoG by about 17%, SPSO by about 20%, and ANN by approximately 25%.

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, SPSOCoG by about 24%, SPSO by about 33%, and ANN by approximately 42%, while Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, MAPE, and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG's performance is the best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best among the compared models, and ANN-PSOCoG is more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric through the training phase. Figure 10(a) shows MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the minimum value of MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest compared to the other models.

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.


5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. For example, the S&P 500 index in the USA lost one-third of its value during only one month of the COVID-19 crisis [65]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dotcom crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on Oct 9, 2007), and the COVID-19 crisis (which peaked on Feb 19, 2020) [65]. As can be seen from the figure, in March 2020 it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year in the subprime crisis and a year and a half in the dotcom bust.

Till September 2, 2020, there had been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG: (a) training, R = 0.99995; (b) validation, R = 0.99992; (c) test, R = 0.99994; (d) all, R = 0.99994.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019 to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG over historical data of S&P 500, and Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG over historical data of S&P 500.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using historical data of NASDAQ-100 are shown in Figure 14(a), while the RE of ANN-PSOCoG using data of NASDAQ-100 is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and the other models (ACP versus ECP for the proposed model, ANN-SPSO, ANN, SPSO, and SPSOCoG) using the S&P 500 dataset.

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO        ANN
MAPE                0.00024      0.0027      0.0038     0.0053      0.0070
MAE                 0.00017      0.047       0.067      0.070       0.087
Elapsed time (sec)  510.82822    891.28091   952.1345   1123.75813  1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG   ANN-SPSO   SPSOCoG    SPSO       ANN
MAPE                0.00072      0.030      0.042      0.062      0.071
MAE                 0.00055      0.0024     0.0033     0.0049     0.0056
Elapsed time (sec)  834.259      973.215    1056.284   1389.368   1537.569


Figure 11: Effect of COVID-19 on the stock market in the USA: the S&P 500 index (= 100 at peak) versus days from the peak, for the dot-com, subprime, and COVID-19 crises.

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test was training the proposed model using the historical data of CANUSD, which represents the ratio between the Canadian dollar and the American dollar. The actual rate and expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12-15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification yields a new efficient search technique. It benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with an ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Also, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model using the S&P 500 dataset. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model. The values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

The proposed model might need more training time, and it requires modern computational resources when it is used with a very huge dataset, an oil reservoir dataset for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be discussed to improve the proposed model, as follows:

(i) Studying the effect of changing NNPHL for each layer, instead of keeping it the same in all hidden layers of the proposed model.

(ii) Extending the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) Discussing the effect of the activation function: applying the same activation function to all the layers versus applying a different activation function to each layer.

(iv) Studying the impact of using more than one swarm for PSOCoG.

(v) Carrying out a comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO.

(vi) Using the proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) to predict the closing price and comparing it with ANN-PSOCoG using a huge dataset.

(vii) Discussing more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed at https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authorsrsquo Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics   S&P 500    Gold       NASDAQ-100   CANUSD
MAPE      0.00158    0.007799   0.00239      0.001719891
MAE       0.030529   0.079009   0.224702     1.19491E-05

Scientific Programming 15

References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyper-parameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Mälardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arévalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervelló-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervelló-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.


[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.


2.2. Standard Particle Swarm Optimization (SPSO). SPSO was presented by Kennedy and Eberhart [24]. SPSO mimics the social behavior of a flock of birds searching for food, where each particle

has its attributes, which are location, speed, and a fitness value calculated by a fitness function. The particle's speed defines the next movement (direction and traveled distance). The fitness value represents an index of the convergence of the particle to the solution. The position of each particle is updated to approach the individual that has an optimal location according to (10) and (11). In every repetition, every particle in the swarm modifies its speed and location based on two terms: the first is the individual optimal solution, which is the solution that the particle can obtain personally; the second is the global optimal solution, which is the solution that the swarm can obtain cooperatively till now. The suggested technique used several particles (pop), which are initialized randomly. For each particle, the corresponding ANN is created and evaluated using the fitness function shown in

f_i = \frac{1}{m} \sum_{i=1}^{m} \left( \text{expected output}_i - \text{actual output}_i \right)^2, \qquad (12)

where m is the number of inputs for the ANN.

The fitness values of all swarm members are determined using (12). For each particle, the location is stored as X_i and the fitness value is stored as P_il. Among all P_il, the one with the minimum fitness value is selected as the global best particle (P_gs); then the location and fitness value of P_gs are stored. This process is repeated by updating the particle positions and velocities according to (10) and (11). The process is iterated till the best solution is found or the maximum number of iterations (maxite) is reached. The global best particle represents the best selected hyperparameters, which are used to build the ANN. Algorithm 1 explains the suggested model.

5. Results and Performance Evaluation

In this section, the evaluation of the performance of the new model is presented. The dataset and settings of the experiment are described in detail, and the results of the proposed model are discussed.

5.1. Experimental Datasets. The historical data of Standard and Poor's 500 (S&P 500) [52] and the Dow Jones Industrial Average (DJIA) [53] were used as the datasets for the stock market prediction experiments to evaluate the ANN-PSOCoG model.

The S&P 500 Index, or the Standard & Poor's 500 Index, is a market-capitalization-weighted index of 500 of the largest publicly traded companies in the US stock market; it highlights the performance of large-cap entities selected by leading economists and weighted by market values. The S&P 500 is a leading indicator, one of the most prevalent benchmarks, and is regarded as the best gauge of large-cap US equities [54].

The Dow Jones Industrial Average (DJIA) is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States, such as Apple Inc., Cisco Systems, The Coca-Cola Company, IBM, Intel, and Nike. It is considered one of the most commonly followed equity indices in the USA. Two-thirds of the DJIA's companies are manufacturers of industrial and consumer goods, while the others represent diverse industries. Besides longevity, two other factors play a role in its widespread popularity: it is understandable to most people, and it reliably indicates the market's basic trend [55].

To evaluate the proposed model, we used the historical data of the S&P 500 dataset, where the overall number of observations for the exchange indices was 3776 trading days, from August 16, 2005, to August 16, 2020. We also used historical data of the DJIA dataset, where the overall number of observations for the exchange indices was 9036 trading days, from Jan 28, 1985, to Dec 02, 2020. Each observation includes the opening price, highest price, lowest price, the

(1) Start
(2) Set up the swarm members randomly
(3) Run the NN corresponding to each particle
(4) while (termination condition is not met)
(5)   for i = 1 to pop
(6)     Compute the fitness value (f_i) for each particle
(7)     if (the fitness value is less than P_il)
(8)       Set the current fitness value as the new P_il
(9)     end-if
(10)    Select the minimum f_i of all particles in the swarm as P_gs
(11)    for d = 1 to D
(12)      Calculate V_id^(t+1) from Equation (10)
(13)      Calculate X_id^(t+1) from Equation (11)
(14)    end-for
(15)  end-for
(16)  return the best particle with gbest
(17) end-while
(18) Run the NN corresponding to the best selected hyperparameters
(19) end-algorithm

ALGORITHM 1: The proposed model.
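To make the control flow of Algorithm 1 concrete, the following minimal Python sketch runs the underlying SPSO loop (the center-of-gravity step of PSOCoG is omitted for brevity) against a toy fitness function that stands in for "build an ANN from this position and return its error"; all names here are illustrative, not taken from the authors' code.

```python
import random

def pso_search(fitness, bounds, pop=20, maxite=100,
               c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=0):
    """Minimal SPSO loop in the shape of Algorithm 1: evaluate every
    particle, track personal bests (P_il) and the global best (P_gs),
    then update velocities and positions each iteration."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                 # personal best positions
    Pf = [fitness(x) for x in X]          # personal best fitness values
    g = min(range(pop), key=Pf.__getitem__)
    G, Gf = P[g][:], Pf[g]                # global best position / fitness
    for t in range(maxite):
        w = w_max - (w_max - w_min) * t / maxite  # linearly decreasing inertia
        for i in range(pop):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            f = fitness(X[i])
            if f < Pf[i]:                 # improved personal best
                P[i], Pf[i] = X[i][:], f
                if f < Gf:                # improved global best
                    G, Gf = X[i][:], f
    return G, Gf

# Toy stand-in for "train an ANN with these hyperparameters and
# return its error": minimize x^2 + y^2 over [-5, 5]^2.
best, err = pso_search(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

In the actual model, `fitness` would be equation (12) evaluated on the network built from the particle's hyperparameters.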


closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and the total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, 80% of the available data were used as a training set, 10% as a validation set, and the last 10% as a test set.
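Because the observations are a time series, the 80%/10%/10% split has to preserve chronological order (train on the oldest prices, test on the most recent). A small sketch of such a split, with illustrative names not taken from the paper:

```python
def chronological_split(rows, train=0.8, val=0.1):
    """Split time-ordered observations into train/validation/test sets
    (80%/10%/10% here) without shuffling, so the model is trained on
    the oldest prices and tested on the most recent ones."""
    i = int(len(rows) * train)
    j = int(len(rows) * (train + val))
    return rows[:i], rows[i:j], rows[j:]

days = list(range(3776))   # e.g. the 3776 S&P 500 trading days
tr, va, te = chronological_split(days)
```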

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) = 20 and the maximum number of iterations (maxite) = 100. Besides, the following settings are used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (HL, NPHL, ACTFUN, and LR) is used to represent the hyperparameters of the ANN.
(vi) The initialization range for HL was set to [1, 7] according to [44, 58].
(vii) NPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg–Marquardt algorithm [62] was used as the training function for the network.
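Setting (v) implies that each 4-D particle position must be decoded into concrete hyperparameters before a network can be built. One plausible decoding under the ranges above (the function name, rounding, and clamping scheme are assumptions for illustration, not the paper's code):

```python
ACTFUNS = ["logsig", "softmax", "tansig", "purelin"]

def decode(position):
    """Clamp and round a 4-D particle position into concrete ANN
    hyperparameters using the ranges listed in the settings above."""
    hl, nphl, act, lr = position
    return {
        "hidden_layers": int(round(min(max(hl, 1), 7))),          # HL in [1, 7]
        "neurons_per_layer": int(round(min(max(nphl, 14), 21))),  # NPHL in [14, 21]
        "activation": ACTFUNS[min(max(int(act), 0), len(ACTFUNS) - 1)],
        "learning_rate": min(max(lr, 0.01), 0.9),                 # LR in [0.01, 0.9]
    }

# A position decoding to 6 hidden layers, 21 neurons per layer,
# 'purelin', and LR = 0.5469 (the configuration reported as best later):
cfg = decode([6.2, 20.7, 3.4, 0.5469])
```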

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i} \right| \times 100, \qquad (13)

\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \text{actual output}_i - \text{expected output}_i \right|, \qquad (14)

\text{RE}_i = \frac{\text{actual output}_i - \text{expected output}_i}{\text{actual output}_i}, \qquad (15)

\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \text{actual output}_i - \text{expected output}_i \right)^2. \qquad (16)
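Equations (13) to (16) translate directly into code; a small self-contained sketch with illustrative variable names:

```python
def mape(actual, expected):
    # Equation (13): mean absolute percentage error, in percent
    return 100.0 / len(actual) * sum(abs((a - e) / a)
                                     for a, e in zip(actual, expected))

def mae(actual, expected):
    # Equation (14): mean absolute error
    return sum(abs(a - e) for a, e in zip(actual, expected)) / len(actual)

def rel_err(actual, expected):
    # Equation (15): per-observation relative error
    return [(a - e) / a for a, e in zip(actual, expected)]

def mse(actual, expected):
    # Equation (16): mean-square error
    return sum((a - e) ** 2 for a, e in zip(actual, expected)) / len(actual)

acp = [100.0, 200.0, 400.0]   # actual closing prices (toy values)
ecp = [99.0, 202.0, 400.0]    # expected (predicted) closing prices
```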

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG is also compared with the other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system, an Intel Core i5 processor running at 3.30 GHz, and 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only the artificial neural network was used to predict the closing price. The historical data of Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg–Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.007, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE for this method is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization (SPSO) with the CoG concept was used to predict the closing price using historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100. The values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE for this method is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and standard particle swarm optimization (ANN-SPSO) was


used to predict the closing price, where SPSO was used to train the network to select the best configurations. The dataset of historical data of the S&P 500 was used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of HL, NPHL, ACTFUN, and LR. The dataset of historical data of the S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7,

Figure 2: ANN model performance. (a) ACP versus ECP-ANN. (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO. (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG. (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: NHL was equal to 6, NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.

Although the process of determining the best hyperparameters of an ANN usually consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of regression (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of the regression coefficient (R) was 0.99995 for training, 0.99992 for validation, and 0.99994 for the test, while for all data it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the output forecasted via the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with a very high accuracy, close to the actual closing price. Based on the above discussion, the proposed model is able to give investors the confidence to make the correct decisions to buy or sell shares, achieving the largest possible profit while avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, reflecting the fluctuation of ECP around ACP. So, if the fluctuation of the price increases, the error of prediction increases, and the risk of predicting the closing price for the stock market increases.
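The reported R is the Pearson correlation between network outputs and targets; a minimal sketch of how it could be computed (illustrative code, not the authors' implementation):

```python
from math import sqrt

def regression_r(targets, outputs):
    """Pearson correlation coefficient between targets and forecasted
    outputs, i.e. the R shown in the regression plots."""
    n = len(targets)
    mt, mo = sum(targets) / n, sum(outputs) / n
    cov = sum((t - mt) * (o - mo) for t, o in zip(targets, outputs))
    var_t = sum((t - mt) ** 2 for t in targets)
    var_o = sum((o - mo) ** 2 for o in outputs)
    return cov / sqrt(var_t * var_o)

# Near-perfect predictions give R close to 1:
r = regression_r([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9])
```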

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

5.3.6. Comparison of ANN-PSOCoG with ANN, SPSO, SPSOCoG, and ANN-SPSO. To study the efficiency of the ANN-PSOCoG model, a comparison of this model with the ANN-SPSO, SPSOCoG, SPSO, and ANN models was carried out using historical data of the S&P 500 and DJIA datasets. Figure 8 shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the S&P 500. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, SPSOCoG by about 17%, SPSO by about 20%, and ANN by approximately 25%.

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, SPSOCoG by about 24%, SPSO by about 33%, and ANN by approximately 42%, while Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, as well as MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG performs the best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best against the other models, and ANN-PSOCoG is more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric through the training phase. Figure 10(a) shows the MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows the MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the best minimum value of MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest compared to the other models.

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.


5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. For example, the S&P 500 index in the USA lost one-third of its value during only one month of the COVID-19 crisis [65]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dotcom crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on Oct 9, 2007), and the COVID-19 crisis (which peaked on Feb 19, 2020) [65]. As can be seen from the figure, in March 2020 it took only one month for the S&P 500 to lose one-third of its value, while it took one year for the subprime crisis to decline by the same amount, and one year and a half for the dotcom bust.

Till September 2, 2020, there were about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training, R = 0.99995. (b) Validation, R = 0.99992. (c) Test, R = 0.99994. (d) All, R = 0.99994.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019, to August 14, 2020, including the period of the spread of COVID-19, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG over historical data of the S&P 500, and Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG over the same data.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using historical data of NASDAQ-100 are shown in Figure 14(a), while RE of ANN-PSOCoG using the NASDAQ-100 data is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and the other models (legend: ACP, ECP-NN-PSO, ECP-NN, ECP-proposed model, ECP-PSO, ECP-SPSOCoG; closing price over days).

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models (legend: ACP, ECP-ANN-PSOCoG, ECP-SPSO, ECP-ANN-SPSO, ECP-ANN, ECP-SPSOCoG; closing price over days). (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics              ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO         ANN
MAPE                 0.00024      0.0027      0.0038     0.0053       0.0070
MAE                  0.00017      0.047       0.067      0.070        0.087
Elapsed time (sec)   510.82822    891.28091   952.1345   1123.75813   1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics              ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO         ANN
MAPE                 0.00072      0.030       0.042      0.062        0.071
MAE                  0.00055      0.0024      0.0033     0.0049       0.0056
Elapsed time (sec)   834.259      973.215     1056.284   1389.368     1537.569

(Note: the decimal points in these tables are reconstructed from the garbled source; the values are consistent with the error figures and speedup factors quoted in the text.)
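The elapsed-time rows above are what produce the speedup factors quoted in the conclusion. The sketch below recomputes them; the decimal placement in `ELAPSED` is reconstructed from the garbled source values, so treat the exact figures as assumptions.

```python
# Reconstructed elapsed times (seconds) from Tables 1 and 2; the decimal
# placement is inferred from the speedups quoted in the text, not verbatim.
ELAPSED = {
    "S&P 500": {"ANN-PSOCoG": 510.82822, "ANN-SPSO": 891.28091,
                "SPSOCoG": 952.1345, "SPSO": 1123.75813, "ANN": 1359.87016},
    "DJIA": {"ANN-PSOCoG": 834.259, "ANN-SPSO": 973.215,
             "SPSOCoG": 1056.284, "SPSO": 1389.368, "ANN": 1537.569},
}

def speedups(dataset: str) -> dict:
    """Speedup of ANN-PSOCoG over each other model (elapsed-time ratio)."""
    times = ELAPSED[dataset]
    base = times["ANN-PSOCoG"]
    return {model: t / base for model, t in times.items()
            if model != "ANN-PSOCoG"}

for dataset in ELAPSED:
    print(dataset, {m: round(r, 2) for m, r in speedups(dataset).items()})
```

The ratios land close to the approximately 1.7/1.8/2.2/2.6 (S&P 500) and 1.2/1.3/1.7/1.9 (DJIA) factors reported in the conclusion.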


Figure 11: Effect of COVID-19 on the stock market in the USA: S&P 500 index (normalized to 100 at peak) versus days from the peak, comparing the dot-com crisis, the subprime crisis, and the COVID-19 crisis.

Figure 10: MSE for all models (ANN-PSOCoG, ANN-SPSO, SPSO, SPSOCoG, and ANN) versus training epoch. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of the S&P 500 (December 1, 2019 to August 27, 2020). (a) ACP versus ECP using ANN-PSOCoG (closing price over days). (b) RE for ANN-PSOCoG.


The last test trained the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the American dollar. The actual rate and the expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12–15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold (December 1, 2019 to August 27, 2020). (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100 (December 1, 2019 to August 27, 2020). (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD (December 1, 2019 to August 27, 2020). (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction of the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results confirm the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification yields a new, efficient search technique: it benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with an ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Likewise, using the DJIA dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18%. In terms of elapsed time, the results showed that, using the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model. Using the DJIA dataset, the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model: the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.

The proposed model might need more training time, and it requires modern computational resources when used with a very large dataset, an oil reservoir dataset, for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be discussed to improve the proposed model as follows:

(i) We study the effect of changing NNPHL for each layer, instead of keeping it the same across all hidden layers in the proposed model.

(ii) We extend the proposed ANN-PSOCoG to discuss the ability to select more hyperparameters, such as the number of iterations and the batch size.

(iii) We discuss the effect of the activation function, whether we apply the same activation function to all layers or a different activation function for each layer.

(iv) The impact of using more than one swarm for PSOCoG can be studied.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We discuss more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics   S&P 500    Gold       NASDAQ-100   CANUSD
MAPE      0.00158    0.007799   0.00239      0.001719891
MAE       0.030529   0.079009   0.224702     1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, UK, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets: https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.


[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology (ICGST), vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard and Poor's 500 (S&P 500): https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices: https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's Research Insight North America data guide," 2004, http://140.137.101.73:8008/ri/manual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, China, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies (IHIET 2019), T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report: https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard: https://covid19.who.int.

[67] Historical data for stock market indices: https://finance.yahoo.com/lookup.



closing price, and the total volume of the stocks traded per day. The opening price, highest price, lowest price, and the total volume of the stocks traded per day were used as inputs of the artificial neural network, while the closing price was used as the output of the ANN. In all scenarios discussed in this paper, the data are divided such that 80% of the available data were used as a training set, 10% as a validation set, and the last 10% as a test set.
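The 80/10/10 chronological split described above can be sketched as follows; the function and array names are illustrative, not taken from the authors' code.

```python
import numpy as np

def split_series(samples: np.ndarray):
    """Split an ordered series into 80% training, 10% validation, 10% test."""
    n = len(samples)
    n_train = int(0.8 * n)   # first 80% for training
    n_val = int(0.1 * n)     # next 10% for validation
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]  # remaining 10% for testing
    return train, val, test

train, val, test = split_series(np.arange(1000))
print(len(train), len(val), len(test))  # 800 100 100
```

Note that the split preserves chronological order (no shuffling), which matters for time-series prediction.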

5.2. Parameter Settings. To run the proposed method, the PSOCoG and multilayer neural network settings were set as follows: the swarm size (pop) was 20, and the maximum number of iterations (maxite) was 100. Besides, the following settings were used:

(i) The values of c1 and c2 were set to 2 [56].
(ii) The value of Wmin was set to 0.4 [56].
(iii) The value of Wmax was set to 0.9 [56].
(iv) The inertia weight (W) was linearly decreased from Wmax to Wmin [57].
(v) A 4-dimensional search space (HL, NPHL, ACTFUN, and LR) is used to represent the hyperparameters of the ANN.
(vi) Initialization ranges for HL were set to [1, 7] according to [44, 58].
(vii) NPHL was set to [14, 21] according to [59, 60].
(viii) ACTFUN was set to {'logsig', 'softmax', 'tansig', 'purelin'}.
(ix) LR was set to [0.01, 0.9] [61].
(x) The Levenberg–Marquardt algorithm [62] was used as the training function for the network.
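Settings (ii)–(iv) above describe a linearly decreasing inertia weight. A minimal sketch, assuming the decrease is spread evenly over the 100 iterations:

```python
# Inertia weight W decreasing linearly from Wmax = 0.9 to Wmin = 0.4
# over maxite = 100 iterations (0-based iteration index).
W_MAX, W_MIN, MAX_ITE = 0.9, 0.4, 100

def inertia(ite: int) -> float:
    """Inertia weight used at iteration `ite`."""
    return W_MAX - (W_MAX - W_MIN) * ite / (MAX_ITE - 1)

print(round(inertia(0), 2), round(inertia(MAX_ITE - 1), 2))  # 0.9 0.4
```

A large early W favors global exploration, while the small final W favors local refinement around the best positions found.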

In this study, the mean absolute percentage error (MAPE), mean absolute error (MAE), relative error (RE), and mean-square error (MSE), shown in (13) to (16) [63], are used to evaluate the accuracy of the proposed model:

$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\mathrm{actual\ output}_i - \mathrm{expected\ output}_i}{\mathrm{actual\ output}_i}\right|\times 100, \qquad (13)$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\mathrm{actual\ output}_i - \mathrm{expected\ output}_i\right|, \qquad (14)$$

$$\mathrm{RE}_i = \frac{\mathrm{actual\ output}_i - \mathrm{expected\ output}_i}{\mathrm{actual\ output}_i}, \qquad (15)$$

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{actual\ output}_i - \mathrm{expected\ output}_i\right)^2. \qquad (16)$$
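Equations (13)–(16) translate directly into code. A self-contained sketch (the sample arrays are illustrative, not the paper's data):

```python
import numpy as np

def mape(actual: np.ndarray, expected: np.ndarray) -> float:
    """Mean absolute percentage error, equation (13)."""
    return float(np.mean(np.abs((actual - expected) / actual)) * 100)

def mae(actual: np.ndarray, expected: np.ndarray) -> float:
    """Mean absolute error, equation (14)."""
    return float(np.mean(np.abs(actual - expected)))

def relative_error(actual: np.ndarray, expected: np.ndarray) -> np.ndarray:
    """Per-sample relative error, equation (15)."""
    return (actual - expected) / actual

def mse(actual: np.ndarray, expected: np.ndarray) -> float:
    """Mean-square error, equation (16)."""
    return float(np.mean((actual - expected) ** 2))

actual = np.array([2000.0, 2500.0, 3000.0])
expected = np.array([1990.0, 2510.0, 2994.0])
print(round(mape(actual, expected), 4))  # 0.3667
```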

5.3. Results and Discussion. To evaluate the performance of the proposed model and verify its effectiveness, several simulations were conducted in five different scenarios:

(1) In the first scenario, ANN was used.
(2) In the second scenario, the standard PSO (SPSO) was used.
(3) In the third scenario, the combination of SPSO with CoG (SPSOCoG) was used.
(4) In the fourth scenario, the combination of ANN with SPSO was used.
(5) In the fifth scenario, the proposed model ANN-PSOCoG was used.

ANN-PSOCoG was also compared with the other models, that is, ANN, SPSO, and ANN-SPSO. Finally, the impact of COVID-19 on the proposed model was studied.

The experiments were executed using a PC with the Windows 10 operating system and an Intel Core i5 processor running at 3.30 GHz with 8 GB of RAM.

5.3.1. Scenario 1: ANN Experiments. In this scenario, only the artificial neural network was used to predict the closing price. The historical data of Standard and Poor's 500 (S&P 500) were used as the dataset to train the ANN. Since the Levenberg–Marquardt function has a fast convergence rate, it was used to train the proposed artificial neural network.

To evaluate this model, MAPE, MAE, and RE were calculated. The computed MAPE for this model is 0.0070, while MAE is 0.087. Figure 2(a) shows the actual closing price (ACP) and the expected closing price for this model, while Figure 2(b) shows the relative error (RE) for the ANN model.

5.3.2. Scenario 2: Standard PSO (SPSO) Experiments. In this scenario, only standard particle swarm optimization (SPSO) was used to predict the closing price using historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100; the values of the other parameters were set as mentioned above. MAPE for this method is 0.0053, while MAE is 0.070. The comparison between the actual closing price and the expected closing price using SPSO (ECP-SPSO) is shown in Figure 3(a), while the relative error for this method is shown in Figure 3(b).
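For reference, one velocity/position update of the standard PSO used in this scenario can be sketched as below, with the Section 5.2 settings c1 = c2 = 2; the swarm initialization and global best are illustrative values, not the experiment's state.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0):
    """One standard PSO update: new velocity, then new position."""
    r1 = rng.random(pos.shape)  # random factors drawn per particle/dimension
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

pos = np.zeros((20, 4))    # swarm of 20 particles in a 4-D search space
vel = np.zeros_like(pos)
pbest = pos.copy()         # personal best positions
gbest = np.ones(4)         # illustrative global best position
pos, vel = pso_step(pos, vel, pbest, gbest, w=0.9)
print(pos.shape)  # (20, 4)
```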

5.3.3. Scenario 3: SPSOCoG Experiments. In this scenario, standard particle swarm optimization with the CoG concept was used to predict the closing price using historical data of the S&P 500. The size of the swarm was 20, and the maximum number of iterations was 100; the values of the other parameters were set as mentioned above. MAPE for this method is 0.0038, while MAE is 0.067. The comparison between the actual closing price and the expected closing price using SPSOCoG (ECP-SPSOCoG) is shown in Figure 4(a), while the relative error for this method is shown in Figure 4(b).

5.3.4. Scenario 4: ANN-SPSO Experiments. In this scenario, a combination of the artificial neural network and standard particle swarm optimization (ANN-SPSO) was


used to predict the closing price, where SPSO was used to train the network and select the best configuration. The dataset of historical data of the S&P 500 was used in this scenario. MAPE for ANN-SPSO turned out to be 0.0027, while MAE for ANN-SPSO equals 0.047. Figure 5(a) depicts ACP and the expected closing price for ANN-SPSO (ECP-ANN-SPSO), while RE for this model is shown in Figure 5(b).

5.3.5. Scenario 5: ANN-PSOCoG Experiments. In this scenario, to design the best architecture of the ANN, the proposed model (ANN-PSOCoG) was run with the chosen settings to select the best hyperparameters for the network in terms of HL, NPHL, ACTFUN, and LR. The dataset of historical data of the S&P 500 was used to train the network. The result of running the ANN-PSOCoG model shows that the best configuration was chosen by particle 7,

Figure 2: ANN model performance. (a) ACP versus ECP-ANN (closing price over days). (b) The relative error for the ANN model.

Figure 3: SPSO model performance. (a) ACP versus ECP-SPSO (closing price over days). (b) The relative error for SPSO.

Figure 4: SPSOCoG model performance. (a) ACP versus ECP-SPSOCoG (closing price over days). (b) The relative error for SPSOCoG.


with MAPE equal to 0.00024 and MAE equal to 0.00017. The prediction accuracy of ANN-PSOCoG is very high, as seen from the MAPE and MAE values, which are very small and close to zero. The corresponding hyperparameters were as follows: NHL was equal to 6, NNPHL was equal to 21, the selected activation function was purelin, and the learning rate was equal to 0.5469.
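To make the selected configuration concrete, the sketch below decodes a 4-D particle position into (HL, NPHL, ACTFUN, LR) using the search ranges from Section 5.2; the rounding/clipping scheme is an assumption for illustration, not the authors' exact encoding.

```python
ACT_FUNS = ["logsig", "softmax", "tansig", "purelin"]

def decode(particle):
    """Map a 4-D particle position onto valid ANN hyperparameter values."""
    hl = min(max(round(particle[0]), 1), 7)            # hidden layers in [1, 7]
    nphl = min(max(round(particle[1]), 14), 21)        # neurons per layer in [14, 21]
    act = ACT_FUNS[min(max(int(particle[2]), 0), 3)]   # activation function index
    lr = min(max(particle[3], 0.01), 0.9)              # learning rate in [0.01, 0.9]
    return hl, nphl, act, lr

# A position near the best configuration reported in the text:
print(decode([5.8, 21.4, 3.2, 0.5469]))  # (6, 21, 'purelin', 0.5469)
```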

Although the process of determining the best hyperparameters of an ANN usually consumes a long time [64], the elapsed time to find the best configuration in the ANN-PSOCoG model was only 510.82822 seconds. In light of this, ANN-PSOCoG can be considered efficient in terms of processing time.

The value of regression (R) shows the correlation between the forecasted outputs and the targets. The regression plot of ANN-PSOCoG is shown in Figure 6. The value of the regression coefficient (R) was 0.99995 for training, 0.99992 for validation, and 0.99994 for test, while for all it was 0.99994. In this case, the prediction accuracy of ANN-PSOCoG can be considered very high, since the output forecasted by the proposed ANN-PSOCoG model is almost equal to the target. Figure 7(a) displays the actual closing price and the expected closing price using ANN-PSOCoG (ECP-ANN-PSOCoG). The relative error of the proposed model is shown in Figure 7(b). As can be seen in the figure, the proposed model predicts the closing price with a very high accuracy, close to the actual closing price. Based on the above discussion, the proposed model is able to give investors the confidence to make correct decisions to buy or sell shares, achieving the largest possible profit while avoiding risk and loss. The mean absolute percentage error (MAPE) represents the risk in the stock market, since it captures the fluctuation of ECP around ACP: if the fluctuation of the price increases, the prediction error increases, and the risk of predicting the closing price for the stock market increases.

Besides, MAPE for ANN-PSOCoG was very small, so the risk in the proposed model is very small and the prediction accuracy is very high.

536 Comparison of ANN-PSOCoG with ANN SPSPSPSOCoG and ANN-SPSO To study the efficiency of theANN-PSOCoG model a comparison of this model with the

ANN-SPSO SPSOCoG SPSO and ANN was carried outusing historical data of SampP 500 and DJIA datasets Fig-ure 8 shows the comparison between all modelsrsquo expectedclosing price and the actual closing price using the his-torical data of SampP 500 As can be seen in the figure ANN-PSOCoG outperformed ANN-SPSO in terms of predictionaccuracy by approximately 13 while it outperformedSPSOCoG by about 17 also it outperformed SPSO byabout 20 and ANN-PSOCoG outperformed ANN byapproximately 25

Figure 9(a) shows the comparison between all models' expected closing prices and the actual closing price using the historical data of the DJIA dataset. As can be seen in the figure, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, SPSOCoG by about 24%, SPSO by about 33%, and ANN by approximately 42%. Figure 9(b) shows the relative error (RE) for all models.

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, as well as MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG performs best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best among the compared models, and ANN-PSOCoG is also more efficient than the other models in terms of ET for both the S&P 500 and DJIA datasets.
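The elapsed times in Table 1 translate into the speedup factors quoted later in the conclusion. A quick arithmetic check (the decimal placement of the tabulated times is an assumption of this sketch, restored from the extraction so that the ratios match the conclusion's figures):

```python
# Elapsed time (sec) to find the best configuration, S&P 500 dataset
# (decimal placement is an assumption of this illustrative check)
et = {"ANN-PSOCoG": 510.82822, "ANN-SPSO": 891.28091,
      "SPSOCoG": 952.1345, "SPSO": 1123.75813, "ANN": 1359.87016}
base = et["ANN-PSOCoG"]
speedup = {m: t / base for m, t in et.items() if m != "ANN-PSOCoG"}
for m, s in sorted(speedup.items(), key=lambda kv: kv[1]):
    print(f"{m} takes {s:.2f}x the time of ANN-PSOCoG")
```

The ratios come out near 1.7, 1.9, 2.2, and 2.7, consistent with the "approximately 1.7 to 2.6 times faster" claims for the S&P 500 dataset.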

To show the quality of the compared methods and their convergence to the best solution, MSE is used as a metric during the training phase. Figure 10(a) shows the MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows the MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the lowest MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest among the compared models.
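Convergence speed of the kind shown in Figure 10 can be quantified as the first training epoch at which the MSE falls below a threshold. A small illustrative helper (not from the paper):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def epochs_to_converge(mse_curve, threshold):
    """Index of the first epoch whose MSE is below `threshold`, or None."""
    for epoch, value in enumerate(mse_curve):
        if value < threshold:
            return epoch
    return None

# toy curves: a faster-converging model reaches the threshold earlier
fast = [100.0, 10.0, 1.0, 0.5, 0.4]
slow = [100.0, 60.0, 30.0, 10.0, 1.0]
print(epochs_to_converge(fast, 2.0), epochs_to_converge(slow, 2.0))
```

On the toy curves, the faster model crosses the threshold at epoch 2 and the slower one at epoch 4, mirroring the comparison made between the models in Figure 10.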

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO (closing price versus days). (b) RE for the ANN-SPSO model (RE versus days).

10 Scientific Programming

5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. For example, the S&P 500 index in the USA lost one-third of its value during only one month of the COVID-19 crisis [65]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dotcom crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020 it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year in the subprime crisis and one year and a half in the dotcom bust.
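Figure 11 normalizes each crisis to 100 at its peak, so the drop can be reproduced from raw index levels. A sketch of that normalization and the resulting peak-to-trough loss (the series below is illustrative, only roughly shaped like the February-March 2020 S&P 500 move, not the actual data):

```python
import numpy as np

def normalize_to_peak(prices):
    """Rescale a price series so that its maximum (the peak) equals 100."""
    prices = np.asarray(prices, float)
    return prices / prices.max() * 100.0

def max_drop_from_peak(prices):
    """Largest percentage drop below the series peak."""
    return 100.0 - normalize_to_peak(prices).min()

# toy series with a peak near 3386 and a trough near 2237 (roughly a one-third loss)
series = [3000.0, 3386.0, 2800.0, 2237.0, 2600.0]
print(max_drop_from_peak(series))
```

On this toy series the drop from the peak is about 34%, i.e., roughly the one-third loss described in the text.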

Till September 2, 2020, there have been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG (Output versus Target). (a) Training: R = 0.99995 (fit: Output ≈ 1 × Target + 0.21). (b) Validation: R = 0.99992 (fit: Output ≈ 1 × Target + 0.74). (c) Test: R = 0.99994 (fit: Output ≈ 1 × Target + 0.77). (d) All: R = 0.99994 (fit: Output ≈ 1 × Target + 0.15).

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG (closing price versus days). (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is tested using historical data from December 31, 2019 to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices: S&P 500, Gold, NASDAQ-100, and CANUSD [67].
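The test window above runs from December 31, 2019 to August 14, 2020, and the datasets come from Yahoo Finance [67]. One way to reproduce such a window is sketched below; the `yfinance` package and the ticker symbol are assumptions of this sketch, since the paper does not state which tooling was used:

```python
from datetime import date

# the COVID-19 study window used in the paper
start, end = date(2019, 12, 31), date(2020, 8, 14)
print((end - start).days)  # length of the window in calendar days

# Hypothetical fetch of the same window (requires network access; yfinance and
# the "^GSPC" ticker are assumptions, not stated in the paper):
# import yfinance as yf
# sp500 = yf.download("^GSPC", start="2019-12-31", end="2020-08-14")
```

The window spans 227 calendar days, of which only trading days carry closing prices.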

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG to the historical data of the S&P 500, while Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG on the same data.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over the historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over the same data.

ACP and ECP for the tested ANN-PSOCoG using the historical data of NASDAQ-100 are shown in Figure 14(a), while the RE of ANN-PSOCoG using the NASDAQ-100 data is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and the other models using the S&P 500 dataset (ACP, ECP-ANN-SPSO, ECP-ANN, ECP of the proposed model, ECP-SPSO, and ECP-SPSOCoG; closing price versus days).

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and the other models (closing price versus days). (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO         ANN
MAPE                0.00024      0.0027      0.0038     0.0053       0.0070
MAE                 0.00017      0.047       0.067      0.070        0.087
Elapsed time (sec)  510.82822    891.28091   952.1345   1123.75813   1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG   ANN-SPSO   SPSOCoG    SPSO       ANN
MAPE                0.00072      0.030      0.042      0.062      0.071
MAE                 0.00055      0.0024     0.0033     0.0049     0.0056
Elapsed time (sec)  834.259      973.215    1056.284   1389.368   1537.569


Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index, normalized to 100 at the peak, versus days from the peak, for the dotcom crisis, the subprime crisis, and the COVID-19 crisis).

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset (MSE versus epoch). (b) MSE for all models using the DJIA dataset (MSE versus epoch).

Figure 12: Performance of ANN-PSOCoG over the historical data of the S&P 500. (a) ACP versus ECP using ANN-PSOCoG (closing price versus days). (b) RE for ANN-PSOCoG.


The last test trained the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the American dollar. The actual rate and the expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of the S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12-15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over the historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG (closing price versus days). (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over the historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG (closing price versus days). (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over the historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG (closing price versus days). (b) RE for ANN-PSOCoG.


prediction of the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results confirm the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and demonstrate the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification yields a new, efficient search technique: it benefits from the physical principle of the center of gravity to move the particles toward the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with an ANN as a learning model, forming a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, the S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network, achieving prediction with very high accuracy and very small error. The proposed model displayed better performance than the ANN, SPSO, SPSOCoG, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Using the DJIA dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18%. In terms of elapsed time, using the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, 1.8 times faster than the SPSOCoG model, 2.2 times faster than the SPSO model, and 2.6 times faster than the ANN model. Using the DJIA dataset, the proposed model is approximately 1.2 times faster than the ANN-SPSO model, 1.3 times faster than the SPSOCoG model, 1.7 times faster than the SPSO model, and 1.9 times faster than the ANN model. The proposed ANN-PSOCoG also shows high efficiency under the effect of coronavirus disease (COVID-19): the proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model, with very small values of MAPE, MAE, and RE for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.
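The center-of-gravity idea summarized above can be illustrated with a minimal swarm in which an extra "center" candidate, a fitness-weighted average of the particles' personal bests, competes for the global best. This is a hedged reconstruction inspired by the description in this paper and by the center particle idea of [51], not the authors' exact algorithm; the sphere objective and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective to minimize (global optimum at the origin)."""
    return float(np.sum(x ** 2))

dim, n, iters = 5, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5            # illustrative PSO constants
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    # "center of gravity": personal bests weighted by inverse fitness,
    # offered as an extra candidate for the global best
    weights = 1.0 / (pbest_f + 1e-12)
    cog = (weights[:, None] * pbest).sum(axis=0) / weights.sum()
    gbest = min([pbest[pbest_f.argmin()], cog], key=sphere).copy()

print(sphere(gbest))
```

In the hybrid model, the fitness would instead be the validation error of an ANN built from the hyperparameters encoded in each particle.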

The proposed model might need more training time, and it requires modern computational resources when used with a very large dataset, an oil reservoir dataset for instance. Also, as the particle dimension increases, the complexity increases.

As future work, the following points can be investigated to improve the proposed model:

(i) We study the effect of changing NNPHL for eachlayer instead of being the same in all hidden layersin the proposed model

(ii) We extend the proposed ANN-PSOCoG to discussthe ability to select more hyperparameters such asthe number of iterations and the size of a batch

(iii) We discuss the effect of the activation function, comparing the application of the same activation function to all layers against a different activation function for each layer.

(iv) We study the impact of using more than one swarm for PSOCoG.

(v) A comparison of the proposed model with othertechniques such as ANN-GA or ANN-ACO can becarried out

(vi) The proposed algorithm (PSOCoG) in combination with an Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We discuss more metrics such as root-mean-squareerror (RMSE) and root-mean-square deviation(RMSD)

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authorsrsquo Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics  S&P 500   Gold      NASDAQ-100  CANUSD
MAPE     0.00158   0.007799  0.00239     0.001719891
MAE      0.030529  0.079009  0.224702    1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.
[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.
[3] W. K. R. Fernando, "The economic impact of covid-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.
[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275-288, 2020.
[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1-9, 2019.
[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.
[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.
[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.
[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.
[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.
[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1-32, 2019.
[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, UK, 2015.
[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.
[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 - learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.
[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.
[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445-448, 1988.
[18] I. BU, Z. Baharudin, R. MQ, and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1-6, Kuala Lumpur, Malaysia, 2014.
[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.
[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47-63, 2016.
[21] M. AS and S. YSK, "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290-294, IEEE International, New York, NY, USA, 2015.
[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.
[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1-52, Springer, Cham, Switzerland, 2018.
[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, IEEE, Piscataway, NJ, USA, 1995.
[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845-3852, Hong Kong, China, 2008.
[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136-142, 2013.
[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.
[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.
[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.
[30] Historical data for datasets, https://finance.yahoo.com/world-indices.
[31] R. Arevalo, J. Garcia, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177-192, 2017.
[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963-5975, 2015.
[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261-274, 2016.
[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896-10904, 2009.
[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37-49, 2020.
[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.
[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.
[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.
[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697-703, 2007.
[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.
[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23-30, 2010.
[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.
[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.
[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.
[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195-207, 2016.
[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.
[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497-500, Phuket, Thailand, 2017.
[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276-1282, Hong Kong, China, 2008.
[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.
[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.
[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672-679, 2007.
[52] Historical data of Standard's and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.
[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.
[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/rimanual/rium.htm.
[55] S&P Dow Jones Indices, Dow Jones Industrial Average.
[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505-508, Sanya, Hainan, 2009.
[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157-191, 2019.
[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.
[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565-570, Las Vegas, CA, USA, 2015.
[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241-247, 2016.
[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.
[62] J. J. More, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105-116, Springer-Verlag, Berlin, Germany, 1978.
[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045-076, 2019.
[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.
[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.
[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.
[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.



ampe value of regression (R) shows the correlation betweenthe forecasted outputs and targets ampe regression plot ofANN-PSOCoG is shown in Figure 6 ampe value of the re-gression coefficient (R) for training was 099995 validation was099992 and test was 099994 while for all was 099994 In thiscase the prediction accuracy of ANN-PSOCoG can be con-sidered very high since the forecasted output via the proposedmodel ANN-PSOCoG is almost equal to the target Figure 7(a)displays the actual closing price and expected closing priceusing ANN-PSOCoG (ECP-ANN-PSOCoG) ampe relativeerror of the proposed model is shown in Figure 7(b) As can beseen in the figure the proposed model predicts the closingprice with a very high accuracy that is close to the actual closingprice Based on the above discussion it is noted that theproposed model is able to give investors the trust to make thecorrect decisions to buy or sell shares that achieve the largestpossible profit and avoid risk and loss ampe mean absolutepercentage error (MAPE) represents the risk in the stockmarket and MAPE represents the fluctuation of ECP aroundACP So if the fluctuation of the price increases the error ofprediction increases and the risk of the prediction of theclosing price for the stock market increases

Besides MAPE for ANN-PSOCoG was very small so therisk in the proposed model is very small and predictionaccuracy is very high

536 Comparison of ANN-PSOCoG with ANN SPSPSPSOCoG and ANN-SPSO To study the efficiency of theANN-PSOCoG model a comparison of this model with the

ANN-SPSO SPSOCoG SPSO and ANN was carried outusing historical data of SampP 500 and DJIA datasets Fig-ure 8 shows the comparison between all modelsrsquo expectedclosing price and the actual closing price using the his-torical data of SampP 500 As can be seen in the figure ANN-PSOCoG outperformed ANN-SPSO in terms of predictionaccuracy by approximately 13 while it outperformedSPSOCoG by about 17 also it outperformed SPSO byabout 20 and ANN-PSOCoG outperformed ANN byapproximately 25

Figure 9(a) shows the comparison between all modelsrsquoexpected closing price and the actual closing price using thehistorical data of DJIA dataset As can be seen in the figureANN-PSOCoG outperformed ANN-SPSO in terms ofprediction accuracy by approximately 18 while it out-performed SPSOCoG by about 24 also it outperformedSPSO by about 33 and ANN-PSOCoG outperformed ANNby approximately 42 while Figure 9(b) shows the relativeerror (RE) for all models

Tables 1 and 2 show the elapsed time (ET), which is the time to find the best configuration, together with MAPE and MAE for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the historical data of the S&P 500 and DJIA datasets, respectively.

As shown in Tables 1 and 2, the proposed ANN-PSOCoG performs best, with the lowest values of MAPE and MAE compared to the other models for both the S&P 500 and DJIA datasets. By contrast, the ANN model displays the highest values of MAPE and MAE. Therefore, the proposed model's accuracy is the best among the compared models, and ANN-PSOCoG is also more efficient than the other models in terms of ET for both datasets.
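The ET values translate directly into the speedup factors quoted in the conclusion; a quick check, reading the Table 1 elapsed times as seconds (decimal points restored from the run-together digits, an interpretation):

```python
# Elapsed times (seconds) from Table 1, S&P 500 dataset.
et = {
    "ANN-PSOCoG": 510.82822,
    "ANN-SPSO": 891.28091,
    "SPSOCoG": 952.1345,
    "SPSO": 1123.75813,
    "ANN": 1359.87016,
}
base = et["ANN-PSOCoG"]
# Speedup of ANN-PSOCoG over each other model.
speedup = {m: round(t / base, 2) for m, t in et.items() if m != "ANN-PSOCoG"}
print(speedup)  # about 1.74, 1.86, 2.2, and 2.66, i.e. the "approximately 1.7-2.6 times faster" claims
```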

To show the quality of the considered methods and their convergence to the best solution, MSE is used as a metric during the training phase. Figure 10(a) shows the MSE for the compared models using the S&P 500 dataset, while Figure 10(b) shows the MSE for the compared models using the DJIA dataset.

It is noted from Figure 10 that the suggested ANN-PSOCoG converged more rapidly than the remaining models and achieved the lowest MSE for both the S&P 500 and DJIA datasets. This means that the prediction quality of the proposed model is the highest among the compared models.
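The convergence comparison records MSE once per training epoch; a minimal sketch of that bookkeeping, using a toy gradient-descent linear fit rather than the paper's ANN:

```python
# Log MSE per epoch, as done for the convergence curves in Figure 10.
# The model here is a toy linear fit on synthetic data, not the paper's network.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = 3.0 * x + 1.0 + rng.normal(0.0, 0.05, 200)

w, b, lr = 0.0, 0.0, 0.3
mse_history = []
for epoch in range(100):
    pred = w * x + b
    err = pred - y
    mse_history.append(float(np.mean(err ** 2)))  # metric tracked per epoch
    w -= lr * np.mean(2.0 * err * x)              # gradient descent step on w
    b -= lr * np.mean(2.0 * err)                  # gradient descent step on b

print(mse_history[0], mse_history[-1])  # MSE shrinks as training converges
```

Plotting such per-model histories side by side is what makes the faster convergence of one optimizer visible.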

Figure 5: ANN-SPSO model performance. (a) ACP versus ECP-ANN-SPSO. (b) RE for the ANN-SPSO model.

10 Scientific Programming

5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. For example, the S&P 500 index in the USA lost one-third of its value during only one month of the COVID-19 crisis [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dotcom crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020 it took only one month for the S&P 500 to lose one-third of its value, while the same decline took one year in the subprime crisis and a year and a half in the dotcom bust.
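The one-third drop is a simple peak-to-trough calculation; the index levels below are approximate public values (the February 19, 2020 peak close and the March 23, 2020 trough close), not figures taken from this paper:

```python
# Approximate S&P 500 closes: Feb 19, 2020 peak and Mar 23, 2020 trough.
peak, trough = 3386.15, 2237.40
drawdown = 1.0 - trough / peak  # fractional loss from the peak
print(f"{drawdown:.1%}")        # about a one-third loss in roughly one month
```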

Until September 2, 2020, there were about 25.6 million confirmed cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training: R = 0.99995, fit Output ≈ 1 × Target + 0.21. (b) Validation: R = 0.99992, fit Output ≈ 1 × Target + 0.74. (c) Test: R = 0.99994, fit Output ≈ 1 × Target + 0.77. (d) All: R = 0.99994, fit Output ≈ 1 × Target + 0.15.

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG. (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that reason, the proposed ANN-PSOCoG model is tested using historical data from December 31, 2019, to August 14, 2020, covering the period of the COVID-19 spread, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG to the historical data of the S&P 500, while Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG over the same data.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over the historical data of Gold, while Figure 13(b) shows the RE for ANN-PSOCoG over the same data.

ACP and ECP for ANN-PSOCoG tested on the historical data of NASDAQ-100 are shown in Figure 14(a), while the RE of ANN-PSOCoG on the NASDAQ-100 data is shown in Figure 14(b).
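The COVID-period evaluation amounts to slicing each price series to December 31, 2019 through August 14, 2020 and computing the per-day relative error; a sketch with invented ECP values (the DataFrame layout is an assumption, not the paper's actual pipeline):

```python
import pandas as pd

# Tiny illustrative series; ACP values are approximate S&P 500 closes,
# ECP values are invented "predictions".
data = pd.DataFrame({
    "Date": pd.to_datetime(["2019-12-31", "2020-03-23", "2020-08-14"]),
    "ACP": [3230.78, 2237.40, 3372.85],
    "ECP": [3229.90, 2240.10, 3371.50],
})
# Restrict to the COVID-19 study window used in the paper.
covid = data[(data["Date"] >= "2019-12-31") & (data["Date"] <= "2020-08-14")]
covid = covid.assign(RE=(covid["ECP"] - covid["ACP"]) / covid["ACP"])
print(covid["RE"].abs().max())  # worst-case relative error over the window
```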

Figure 8: The comparison between the proposed model and other models using the historical data of S&P 500.

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics            | ANN-PSOCoG | ANN-SPSO  | SPSOCoG  | SPSO       | ANN
MAPE               | 0.00024    | 0.0027    | 0.0038   | 0.0053     | 0.0070
MAE                | 0.00017    | 0.047     | 0.067    | 0.070      | 0.087
Elapsed time (sec) | 510.82822  | 891.28091 | 952.1345 | 1123.75813 | 1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics            | ANN-PSOCoG | ANN-SPSO  | SPSOCoG  | SPSO       | ANN
MAPE               | 0.00072    | 0.030     | 0.042    | 0.062      | 0.071
MAE                | 0.00055    | 0.0024    | 0.0033   | 0.0049     | 0.0056
Elapsed time (sec) | 8342.59    | 9732.15   | 10562.84 | 13893.68   | 15375.69


Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index normalized to 100 at each crisis peak, plotted against days from the peak, for the dot-com, subprime, and COVID-19 crises).

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test trained the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the American dollar. The actual rate and the expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

The MAE and MAPE values for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and Figures 12-15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results confirm the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and its efficiency under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity" (CoG) concept, called PSOCoG, is proposed. This modification yields a new, efficient search technique: it benefits from the physical principle of the center of gravity to move the particles toward the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with an ANN as a learning model to form a hybrid model called ANN-PSOCoG, in which the decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of ANN-PSOCoG, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network, with very high prediction accuracy and a very small error, and that it outperformed the ANN, SPSO, SPSOCoG, and ANN-SPSO models. Using the S&P 500 dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Using the DJIA dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18%. In terms of elapsed time on the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, 1.8 times faster than the SPSOCoG model, 2.2 times faster than the SPSO model, and 2.6 times faster than the ANN model. On the DJIA dataset, the proposed model is approximately 1.2 times faster than the ANN-SPSO model, 1.3 times faster than the SPSOCoG model, 1.7 times faster than the SPSO model, and 1.9 times faster than the ANN model. The proposed ANN-PSOCoG also shows high efficiency under the effect of coronavirus disease (COVID-19): the model was trained using different stock market datasets, and the results proved its efficiency and accuracy, with very small values of MAPE, MAE, and RE for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.
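This excerpt does not give the PSOCoG update rule, so the sketch below is only a guess at the idea: alongside the usual pbest/gbest attractors of standard PSO, it adds a "center of gravity" computed as a fitness-weighted mean of the personal bests. The objective, weighting scheme, and coefficients are all assumptions:

```python
# Hedged sketch of a PSO variant with a center-of-gravity (CoG) attractor.
# The CoG here is a fitness-weighted mean of personal bests (an assumption),
# used as a third attractor in an otherwise standard PSO on a toy objective.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                       # toy objective to minimize
    return np.sum(x ** 2, axis=-1)

n, d = 20, 5
pos = rng.uniform(-5.0, 5.0, (n, d))
vel = np.zeros((n, d))
pbest = pos.copy()
pbest_f = sphere(pbest)

w, c1, c2, c3 = 0.7, 1.2, 1.2, 0.6   # inertia, pbest, gbest, CoG coefficients
for it in range(200):
    gbest = pbest[np.argmin(pbest_f)]
    weights = 1.0 / (pbest_f + 1e-12)        # heavier "mass" for better particles
    cog = weights @ pbest / weights.sum()    # center of gravity of personal bests
    r1, r2, r3 = rng.random((3, n, d))
    vel = (w * vel + c1 * r1 * (pbest - pos)
           + c2 * r2 * (gbest - pos) + c3 * r3 * (cog - pos))
    pos = pos + vel
    f = sphere(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

print(pbest_f.min())  # should be near 0 after convergence
```

This loosely follows the center-particle idea of [51]; the actual PSOCoG rule may differ.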

The proposed model might need more training time, and it requires modern computational resources when used with a very large dataset, an oil reservoir dataset for instance. Also, as the particle dimension increases, the complexity increases.

As future work, the following points can be considered to improve the proposed model:

(i) We will study the effect of changing NNPHL for each layer, instead of it being the same in all hidden layers of the proposed model.

(ii) We will extend the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) We will discuss the effect of the activation function: applying the same activation function for all layers versus applying a different activation function for each layer.

(iv) The impact of using more than one swarm for PSOCoG can be studied.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with an Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) More metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD), will be discussed.

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed via the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics | S&P 500  | Gold     | NASDAQ-100 | CANUSD
MAPE    | 0.00158  | 0.007799 | 0.00239    | 0.001719891
MAE     | 0.030529 | 0.079009 | 0.224702   | 1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.
[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.
[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.
[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275-288, 2020.
[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1-9, 2019.
[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.
[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.
[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.
[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.
[10] Y. Baydilli and U. Atila, "Understanding effects of hyper-parameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.
[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1-32, 2019.
[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.
[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.
[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 - learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.
[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.
[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445-448, 1988.
[18] I. BU, Z. Baharudin, R. MQ, and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1-6, Kuala Lumpur, Malaysia, 2014.
[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.
[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47-63, 2016.
[21] M. AS and S. YSK, "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290-294, IEEE International, New York, NY, USA, 2015.
[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.
[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1-52, Springer, Cham, Switzerland, 2018.
[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, IEEE, Piscataway, NJ, USA, 1995.
[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845-3852, Hong Kong, China, 2008.
[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136-142, 2013.
[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.
[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.
[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.
[30] Historical data for datasets, https://finance.yahoo.com/world-indices.
[31] R. Arevalo, J. Garcia, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177-192, 2017.
[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963-5975, 2015.
[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261-274, 2016.
[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896-10904, 2009.
[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37-49, 2020.
[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.
[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.
[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.
[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697-703, 2007.
[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.
[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23-30, 2010.
[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.
[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.
[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.
[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195-207, 2016.
[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.
[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497-500, Phuket, Thailand, 2017.
[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276-1282, Hong Kong, China, 2008.
[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.
[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.
[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672-679, 2007.
[52] Historical data of Standard and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.
[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.
[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/ri/manual/rium.htm.
[55] S&P Dow Jones Indices, Dow Jones Industrial Average.
[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505-508, Sanya, Hainan, 2009.
[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157-191, 2019.
[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.
[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 565-570, Las Vegas, CA, USA, 2015.
[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241-247, 2016.
[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.
[62] J. J. More, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105-116, Springer-Verlag, Berlin, Germany, 1978.
[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045-076, 2019.
[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies (IHIET 2019), T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.
[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.
[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.
[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.


Page 10: ANewANN-ParticleSwarmOptimizationwithCenterofGravity (ANN … · 2021. 4. 30. · 2.2.StandardParticleSwarmOptimization(SPSO).SPSO waspresentedbyKennedyandEberhart[24].SPSOmimics

with MPAE equal to 000024 and MAE equal to 000017ampe prediction accuracy of ANN-PSOCoG is very high asseen from MPAE and MAE values in which they are verysmall and close to zero the corresponding hyperparametersie NHL was equal to 6 the NNPHL was equal to 21 theselected activation function was purelin and the learningrate was equal to 05469

Although the process of determining the besthyperparameters of the ANN consumes a long time [64]the elapsed time to find the best configuration in theANN-PSOCoG model was only 51082822 seconds ANN-PSOCoG can be considered in light of this efficient interms of processing time

ampe value of regression (R) shows the correlation betweenthe forecasted outputs and targets ampe regression plot ofANN-PSOCoG is shown in Figure 6 ampe value of the re-gression coefficient (R) for training was 099995 validation was099992 and test was 099994 while for all was 099994 In thiscase the prediction accuracy of ANN-PSOCoG can be con-sidered very high since the forecasted output via the proposedmodel ANN-PSOCoG is almost equal to the target Figure 7(a)displays the actual closing price and expected closing priceusing ANN-PSOCoG (ECP-ANN-PSOCoG) ampe relativeerror of the proposed model is shown in Figure 7(b) As can beseen in the figure the proposed model predicts the closingprice with a very high accuracy that is close to the actual closingprice Based on the above discussion it is noted that theproposed model is able to give investors the trust to make thecorrect decisions to buy or sell shares that achieve the largestpossible profit and avoid risk and loss ampe mean absolutepercentage error (MAPE) represents the risk in the stockmarket and MAPE represents the fluctuation of ECP aroundACP So if the fluctuation of the price increases the error ofprediction increases and the risk of the prediction of theclosing price for the stock market increases

Besides MAPE for ANN-PSOCoG was very small so therisk in the proposed model is very small and predictionaccuracy is very high

536 Comparison of ANN-PSOCoG with ANN SPSPSPSOCoG and ANN-SPSO To study the efficiency of theANN-PSOCoG model a comparison of this model with the

ANN-SPSO SPSOCoG SPSO and ANN was carried outusing historical data of SampP 500 and DJIA datasets Fig-ure 8 shows the comparison between all modelsrsquo expectedclosing price and the actual closing price using the his-torical data of SampP 500 As can be seen in the figure ANN-PSOCoG outperformed ANN-SPSO in terms of predictionaccuracy by approximately 13 while it outperformedSPSOCoG by about 17 also it outperformed SPSO byabout 20 and ANN-PSOCoG outperformed ANN byapproximately 25

Figure 9(a) shows the comparison between all modelsrsquoexpected closing price and the actual closing price using thehistorical data of DJIA dataset As can be seen in the figureANN-PSOCoG outperformed ANN-SPSO in terms ofprediction accuracy by approximately 18 while it out-performed SPSOCoG by about 24 also it outperformedSPSO by about 33 and ANN-PSOCoG outperformed ANNby approximately 42 while Figure 9(b) shows the relativeerror (RE) for all models

Tables 1 and 2 show the elapsed time (ET) which is thetime to find the best configuration MAPE and MAE forANN-PSOCoG ANN-SPSO SPSPCoG SPSO and ANNmodels using the historical data of SampP 500 and DJIAdatasets respectively

As shown in Tables 1 and 2 the proposed ANN-PSOCoGperformance is the best with the lowest values of MAPE andMAE compared to other models for both the SampP 500 andDJIA datasets By contrast the ANN model displays thehighest values of MAPE and MAE amperefore the proposedmodelrsquos accuracy is the best against other models and ANN-PSOCoG is more efficient than other models in terms of ETfor both the SampP 500 and DJIA datasets

To show the quality of the considering methods andtheir convergence to the best solution MSE is used as ametric through the training phase Figure 10(a) shows MSEfor the compared models using SampP 500 dataset whileFigure 10(b) shows MSE for the compared models usingDJIA dataset

It is noted fromFigure 10 that the suggestedANN-PSOCoGconverged more rapidly than the remaining models andachieved the best minimum value of MSE for both the SampP 500and DJIA datasetsampis means that the prediction quality of theproposed model is the highest compared to other models

0 500 1000 1500 2000 2500 3000 3500 4000Days

500

1500

2500

3500

4500

Clos

ing

pric

e

ACPECP-NN-PSO

(a)

0 500 1000 1500 2000 2500 3000 3500 4000Days

02

01

015

005

0

ndash005

RE

(b)

Figure 5 ANN-SPSO model performance (a) ACP versus ECP-ANN-SPSO (b) RE for ANN-SPSO model

10 Scientific Programming

5.3.7. Study of Efficient ANN-PSOCoG under COVID-19. The rapid spread of coronavirus (COVID-19) has had a severe impact on the global stock markets. It has created an unmatched level of risk, resulting in investors suffering considerable losses in a very short time. For example, the S&P 500 index in the USA lost one-third of its value during only one month of the COVID-19 crisis [62]. Figure 11 shows the comparison between the drop of the S&P 500 index during the dot-com crisis (which peaked on March 24, 2000), the subprime crisis (which peaked on October 9, 2007), and the COVID-19 crisis (which peaked on February 19, 2020) [65]. As can be seen from the figure, in March 2020 it took only one month for the S&P 500 to lose one-third of its value, while it took one year for the subprime crisis to decline the same amount and one year and a half for the dot-com bust.

Till September 2, 2020, there had been about 25.6 million positive cases of coronavirus globally, including 852,758 deaths. The pandemic has affected more than 210

Figure 6: The regression values for ANN-PSOCoG. (a) Training: R = 0.99995, Output ≈ 1*Target + 0.21. (b) Validation: R = 0.99992, Output ≈ 1*Target + 0.74. (c) Test: R = 0.99994, Output ≈ 1*Target + 0.77. (d) All: R = 0.99994, Output ≈ 1*Target + 0.15.
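The regression statistics of Figure 6 (an R value near 1 and a fit of the form Output ≈ slope*Target + intercept) can be reproduced with a standard least-squares fit. The arrays below are synthetic stand-ins for the network's targets and outputs, not the paper's data.

```python
import numpy as np

# Targets (actual closing prices) and network outputs; the small perturbations
# mimic a near-perfect fit like the one reported in Figure 6.
target = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])
output = target + np.array([0.3, -0.2, 0.4, -0.1, 0.2])

# Correlation coefficient R between target and output.
r = np.corrcoef(target, output)[0, 1]

# Least-squares line: output ≈ slope * target + intercept.
slope, intercept = np.polyfit(target, output, 1)
print(r, slope, intercept)
```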

Figure 7: ANN-PSOCoG model performance. (a) ACP versus ECP-ANN-PSOCoG (closing price over days). (b) RE for ANN-PSOCoG.


countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019, to August 14, 2020, covering the period of the spread of COVID-19, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].
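Restricting a daily price series to the evaluation window stated above (December 31, 2019 to August 14, 2020) is a simple date-slice; a sketch with a synthetic series follows. Real experiments would use the Yahoo Finance exports cited in [67]; the series here is illustrative only.

```python
import pandas as pd

# Illustrative daily closing prices indexed by calendar date.
dates = pd.date_range("2019-11-01", "2020-09-30", freq="D")
prices = pd.Series(range(len(dates)), index=dates, name="close", dtype=float)

# Restrict to the paper's COVID-19 evaluation window.
covid_window = prices.loc["2019-12-31":"2020-08-14"]
print(covid_window.index[0].date(), covid_window.index[-1].date(), len(covid_window))
```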

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG over historical data of the S&P 500, while Figure 12(b) shows the relative error between ACP and ECP for ANN-PSOCoG over the same data.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using historical data of NASDAQ-100 are shown in Figure 14(a), while RE of ANN-PSOCoG using the NASDAQ-100 data is shown in Figure 14(b).

Figure 8: The comparison between the proposed model and other models (ACP versus ECP for the proposed model, ANN-SPSO, ANN, SPSO, and SPSOCoG; closing price over days).

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and other models (ACP versus ECP). (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG  ANN-SPSO   SPSOCoG   SPSO        ANN
MAPE                0.00024     0.0027     0.0038    0.0053      0.0070
MAE                 0.00017     0.047      0.067     0.070       0.087
Elapsed time (sec)  510.82822   891.28091  952.1345  1123.75813  1359.87016

Table 2: MAPE, MAE, and ET for the ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG  ANN-SPSO  SPSOCoG   SPSO      ANN
MAPE                0.00072     0.030     0.042     0.062     0.071
MAE                 0.00055     0.0024    0.0033    0.0049    0.0056
Elapsed time (sec)  83.4259     97.3215   105.6284  138.9368  153.7569


Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index, normalized to 100 at the peak, plotted against days from the peak for the dot-com, subprime, and COVID-19 crises).

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of the S&P 500 (December 1, 2019, to August 27, 2020). (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test was training the proposed model using the historical data of CANUSD, which represents the ratio between the Canadian dollar and the American dollar. The actual rate and expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and Figures 12–15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification gives a new efficient search technique. It benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, SPSOCoG, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. The results also show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model using the S&P 500 dataset. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows a high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model. The values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.
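This section does not reproduce the PSOCoG update equations, so the sketch below is only one plausible reading of the center-of-gravity idea: particles are attracted toward a fitness-weighted "center of gravity" of the personal bests, so that lower-error particles pull the swarm more strongly. The coefficients, the weighting scheme, and the toy objective are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Toy objective standing in for the prediction error being minimized.
    return float(np.sum(x * x))

n_particles, dim, iters = 20, 3, 150
w, c1, c2 = 0.7, 1.5, 1.5  # assumed inertia and acceleration coefficients
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])

for _ in range(iters):
    # Fitness-weighted "center of gravity" of the personal bests: better
    # (lower-error) particles receive larger weights.
    weights = 1.0 / (1.0 + pbest_f)
    cog = (weights[:, None] * pbest).sum(axis=0) / weights.sum()

    # Velocity update: the CoG replaces the usual global-best attractor.
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (cog - pos)
    pos = pos + vel

    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

print(pbest_f.min())
```

On this convex toy objective the fitness-weighted attractor behaves much like a global best; the paper's contribution is using such a search to pick ANN hyperparameters rather than to minimize a test function.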

The proposed model might need more training time, and it requires modern computational resources when it is used with a very huge dataset, an oil reservoir dataset for instance. Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be discussed to improve the proposed model as follows:

(i) We study the effect of changing NNPHL for each layer, instead of it being the same in all hidden layers, in the proposed model.

(ii) We extend the proposed ANN-PSOCoG to discuss the ability to select more hyperparameters, such as the number of iterations and the batch size.

(iii) We discuss the effect of the activation function: applying the same activation function for all the layers versus applying a different activation function for each layer.

(iv) We study the impact of using more than one swarm for PSOCoG.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We discuss more metrics such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics  S&P 500   Gold      NASDAQ-100  CANUSD
MAPE     0.00158   0.007799  0.00239     0.001719891
MAE      0.030529  0.079009  0.224702    1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyper-parameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, UK, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. BU, Z. Baharudin, R. MQ, and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. AS and S. YSK, "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. Garcia, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.

[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard & Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's Research Insight North America data guide," 2004, http://140.137.101.73:8008/rimanual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 565–570, Las Vegas, CA, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. More, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.

timesfaster than the ANN model ampe proposed ANN-PSOCoGshows a high efficiency under the effect of coronavirusdisease (COVID-19) ampe proposed model was trained usingdifferent datasets of stock markets and the results proved theefficiency and accuracy of the proposed ANN-PSOCoGmodel ampe values of MAPE MAE and RE were very smallfor SampP 500 GOLD NASDAQ-100 and CANUSD datasets

ampe proposed model might need more training time andit requires modern computational resources when it is usedwith a very huge dataset oil reservoir dataset for instanceAlso when the particle dimensions increase the complexitywill increase

As future work some points can be discussed to improvethe proposed model as follows

(i) We study the effect of changing NNPHL for eachlayer instead of being the same in all hidden layersin the proposed model

(ii) We extend the proposed ANN-PSOCoG to discussthe ability to select more hyperparameters such asthe number of iterations and the size of a batch

(iii) We discuss the effect of the activation function ifwe apply the same activation function for all thelayers or apply different activation functions foreach layer

(iv) We study the impact of using more than one swarmfor PSOCoG can be considered

(v) A comparison of the proposed model with othertechniques such as ANN-GA or ANN-ACO can becarried out

(vi) ampe proposed algorithm (PSOCoG) in combina-tion with Elman neural network (ENN-PSOCoG)could be used to predict the closing price andcompare it with ANN-PSOCoG using a hugedataset

(vii) We discuss more metrics such as root-mean-squareerror (RMSE) and root-mean-square deviation(RMSD)

Data Availability

ampe data used in this study are available on Yahoo Financewebsite and can be accessed using the following link httpsfinanceyahoocom

Conflicts of Interest

ampe authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Razan Jamous Mohamed El-Darieby and Hosam ALRahhalcarried out conceptualization and methodology RazanJamous and HosamALRahhal were responsible for softwareformal analysis data curation writing and original draftpreparation Mohamed El-Darieby performed reviewing andediting and supervised and was responsible for fundingacquisition All authors have read and agreed to the pub-lished version of the manuscript

Table 3 MAPE and MAE for SampP 500 Gold NASDAQ-100 and CANUSD datasets

Metrics SampP 500 Gold NASDAQ-100 CANUSDMAPE 000158 0007799 000239 0001719891MAE 0030529 0079009 0224702 119491E-05

Scientific Programming 15



countries [66]. Based on the above findings, an accurate prediction model for the stock market is very important. For that, the proposed ANN-PSOCoG model is presented and tested using historical data from December 31, 2019, to August 14, 2020, including the period of the spread of COVID-19, for different stock market indices such as S&P 500, Gold, NASDAQ-100, and CANUSD [67].

Figure 12(a) displays the result of applying the proposed ANN-PSOCoG over historical data of S&P 500, while Figure 12(b) shows the relative error for ACP and ECP for ANN-PSOCoG over historical data of S&P 500.

Figure 13(a) shows ACP and ECP for ANN-PSOCoG over historical data of Gold, while Figure 13(b) shows RE for ANN-PSOCoG over historical data of Gold.

ACP and ECP for the tested ANN-PSOCoG using historical data of NASDAQ-100 are shown in Figure 14(a), while the RE of ANN-PSOCoG using data of NASDAQ-100 is shown in Figure 14(b).
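The excerpt plots RE in the (b) panels of Figures 12-14 but does not restate its formula. A minimal sketch of the usual definition, assuming RE is the signed error normalized by the actual closing price (the helper name and the toy numbers are ours, not from the paper):

```python
def relative_error(acp, ecp):
    """Signed relative error between actual (ACP) and estimated (ECP) closing prices.

    Assumes RE = (ACP - ECP) / ACP, consistent with the small signed values
    (roughly -0.02 to 0.06) on the RE axes of the figures.
    """
    return [(a - e) / a for a, e in zip(acp, ecp)]

# Toy values, for illustration only (not taken from the paper's datasets).
acp = [3230.78, 3257.85, 3234.85]
ecp = [3228.50, 3260.00, 3233.00]
re = relative_error(acp, ecp)
```

A near-zero RE series, as in Figure 12(b), indicates that the estimated closing price tracks the actual price closely on every trading day.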

Figure 8: The comparison between the proposed model and other models (closing price, ACP versus ECP, for the proposed model and the NN, PSO, NN-PSO, and SPSOCoG models).

Figure 9: Performance of all models using the DJIA dataset. (a) The comparison between the proposed model and other models. (b) RE for all models using the DJIA dataset.

Table 1: MAPE, MAE, and ET for ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the S&P 500 dataset.

Metrics             ANN-PSOCoG   ANN-SPSO    SPSOCoG    SPSO         ANN
MAPE                0.00024      0.0027      0.0038     0.0053       0.0070
MAE                 0.00017      0.047       0.067      0.070        0.087
Elapsed time (sec)  510.82822    891.28091   952.1345   1123.75813   1359.87016

Table 2: MAPE, MAE, and ET for ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN models using the DJIA dataset.

Metrics             ANN-PSOCoG   ANN-SPSO   SPSOCoG    SPSO       ANN
MAPE                0.00072      0.030      0.042      0.062      0.071
MAE                 0.00055      0.0024     0.0033     0.0049     0.0056
Elapsed time (sec)  834.259      973.215    1056.284   1389.368   1537.569
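The "times faster" comparisons quoted in the conclusion follow directly from the elapsed times in Tables 1 and 2. The decimal points of those times appear to have been lost in extraction; the placement below is our assumption, but the speedup ratios are unaffected by a uniform shift of the decimal point. A quick check:

```python
# Elapsed times (seconds) as read from Table 1 (S&P 500) and Table 2 (DJIA);
# decimal placement is an assumption, the ratios are not.
sp500 = {"ANN-PSOCoG": 510.82822, "ANN-SPSO": 891.28091,
         "SPSOCoG": 952.1345, "SPSO": 1123.75813, "ANN": 1359.87016}
djia = {"ANN-PSOCoG": 834.259, "ANN-SPSO": 973.215,
        "SPSOCoG": 1056.284, "SPSO": 1389.368, "ANN": 1537.569}

def speedups(times, baseline="ANN-PSOCoG"):
    # Ratio of each model's elapsed time to the proposed model's time;
    # a ratio of 1.7 means the baseline is ~1.7 times faster.
    return {m: t / times[baseline] for m, t in times.items() if m != baseline}

print(speedups(sp500))  # ANN-SPSO ~1.74x, SPSOCoG ~1.86x, SPSO ~2.20x, ANN ~2.66x
print(speedups(djia))
```

These ratios match the approximately 1.7x, 1.8x, 2.2x, and 2.6x (S&P 500) and 1.2x, 1.3x, 1.7x, and 1.9x (DJIA) speedups stated in the conclusion.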

12 Scientific Programming

Figure 11: Effect of COVID-19 on the stock market in the USA (S&P 500 index, normalized to 100 at its peak, versus days from the peak, for the dot-com, subprime, and COVID-19 crises).

Figure 10: MSE for all models. (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of S&P 500. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.


The last test was training the proposed model using the historical data of CANUSD, which represents the ratio between the Canadian dollar and the American dollar. The actual rate and expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.
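The MAPE and MAE values reported in Table 3 follow the standard definitions; note that the Table 3 MAPE values are expressed as fractions rather than percentages. A minimal sketch (the helper names and toy values are ours, not from the paper):

```python
def mape(actual, predicted):
    # Mean absolute percentage error, as a fraction (not multiplied by 100),
    # matching the magnitude of the values reported in Table 3.
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    # Mean absolute error, in the units of the price series itself.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy CANUSD-like rates, for illustration only.
actual = [0.7430, 0.7482, 0.7511]
predicted = [0.7441, 0.7476, 0.7509]
print(mape(actual, predicted), mae(actual, predicted))
```

Because MAE carries the units of the series, the tiny CANUSD MAE in Table 3 partly reflects that exchange rates are below 1, which is why MAPE is the more comparable column across datasets.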

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

As seen from Table 3 and from Figures 11-14, the results of the proposed ANN-PSOCoG showed a high accuracy of prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6 Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept (PSOCoG) is proposed. This modification gives a new, efficient search technique that benefits from the physical principle of the center of gravity to move the particles to the new best-predicted position. The newly proposed technique is used as a decision-making model integrated with an ANN as a learning model to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network with very high prediction accuracy and a very small error. The proposed model displayed better performance compared with the ANN, SPSO, SPSOCoG, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Also, the results show that the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18% using the DJIA dataset. In terms of elapsed time, the results showed that the proposed model is approximately 1.7 times faster than the ANN-SPSO model, approximately 1.8 times faster than the SPSOCoG model, approximately 2.2 times faster than the SPSO model, and approximately 2.6 times faster than the ANN model using the S&P 500 dataset. Using the DJIA dataset, the results showed that the proposed model is approximately 1.2 times faster than the ANN-SPSO model, approximately 1.3 times faster than the SPSOCoG model, approximately 1.7 times faster than the SPSO model, and approximately 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model: the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.
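The exact PSOCoG update rule is specified earlier in the paper, not in this excerpt. As an illustrative sketch only, assuming the center of gravity is a fitness-weighted mean of particle positions used as a third attractor alongside the personal and global bests (the weighting scheme, coefficients, and velocity clamping below are our assumptions, not the paper's formulation):

```python
import random

def pso_cog(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, c3=1.0,
            bound=5.0, seed=0):
    """Toy PSO with a center-of-gravity (CoG) attractor, minimizing f."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        # CoG: fitness-weighted mean position (lower cost -> larger weight).
        costs = [f(p) for p in pos]
        best_cost = min(costs)
        weights = [1.0 / (c - best_cost + 1e-9) for c in costs]
        total = sum(weights)
        cog = [sum(wt * p[d] for wt, p in zip(weights, pos)) / total
               for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                v = (w * vel[i][d]
                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                     + c2 * rng.random() * (gbest[d] - pos[i][d])
                     + c3 * rng.random() * (cog[d] - pos[i][d]))
                vel[i][d] = max(-bound, min(bound, v))  # clamp velocity
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function; the swarm should approach the origin.
best, best_val = pso_cog(lambda x: sum(v * v for v in x), dim=3)
```

The extra CoG term pulls the swarm toward the region where low-cost particles concentrate, which is the intuition behind the faster convergence reported above.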

The proposed model might need more training time, and it requires modern computational resources when it is used with a very large dataset (an oil reservoir dataset, for instance). Also, when the particle dimensions increase, the complexity will increase.

As future work, some points can be discussed to improve the proposed model, as follows:

(i) We will study the effect of changing the NNPHL for each layer, instead of keeping it the same in all hidden layers of the proposed model.

(ii) We will extend the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) We will discuss the effect of the activation function, i.e., whether we apply the same activation function to all layers or a different activation function to each layer.

(iv) We will study the impact of using more than one swarm for PSOCoG.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with an Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG using a huge dataset.

(vii) We will discuss more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed using the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authorsrsquo Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics   S&P 500    Gold       NASDAQ-100   CANUSD
MAPE      0.00158    0.007799   0.00239      0.001719891
MAE       0.030529   0.079009   0.224702     1.19491E-05


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.
[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.
[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.
[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275-288, 2020.
[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1-9, 2019.
[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.
[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.
[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.
[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.
[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.
[11] P. Probst, A.-L. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1-32, 2019.
[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.
[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.
[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 - learning rate, batch size, momentum, and weight decay," CoRR, 2018, http://arxiv.org/abs/1803.09820.
[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.
[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445-448, 1988.
[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1-6, Kuala Lumpur, Malaysia, 2014.
[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.
[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47-63, 2016.
[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290-294, IEEE International, New York, NY, USA, 2015.
[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.
[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1-52, Springer, Cham, Switzerland, 2018.
[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, IEEE, Piscataway, NJ, USA, 1995.
[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845-3852, Hong Kong, China, 2008.
[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136-142, 2013.
[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.
[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.
[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.
[30] Historical data for datasets, https://finance.yahoo.com/world-indices.
[31] R. Arevalo, J. Garcia, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177-192, 2017.
[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963-5975, 2015.
[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261-274, 2016.
[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896-10904, 2009.

[35] R Cervello-Royo and F Guijarro ldquoForecasting stock markettrend a comparison of machine learning algorithmsrdquo Fi-nance Markets and Valuation vol 6 no 1 pp 37ndash49 2020

[36] M Nabipour P Nayyeri H Jabani and A Mosavi ldquoDeeplearning for stock market predictionrdquo httparxivorgabsarXiv200401497

16 Scientific Programming


Figure 11: Effect of COVID-19 on the stock market in the USA. The S&P 500 index (normalized to 100 at its peak) is plotted against days from the peak, comparing the dot-com crisis, the subprime crisis, and the COVID-19 crisis.

Figure 10: MSE versus epoch for all models (ANN-PSOCoG, ANN-SPSO, SPSOCoG, SPSO, and ANN). (a) MSE for all models using the S&P 500 dataset. (b) MSE for all models using the DJIA dataset.

Figure 12: Performance of ANN-PSOCoG over historical data of S&P 500 (December 2019 to August 2020). (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

The last test was training the proposed model using the historical data of CANUSD, which represents the exchange rate between the Canadian dollar and the US dollar. The actual rate and the expected rate are shown in Figure 15(a), while the prediction's relative error is shown in Figure 15(b).

MAE and MAPE for the proposed ANN-PSOCoG using the historical data of S&P 500, Gold, NASDAQ-100, and CANUSD are shown in Table 3.

As seen from Table 3 and from Figures 12-15, the results of the proposed ANN-PSOCoG showed a high accuracy of

Figure 13: Performance of ANN-PSOCoG over historical data of Gold. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 14: Performance of ANN-PSOCoG over historical data of NASDAQ-100. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity (CoG)" concept, called PSOCoG, is proposed. This modification yields a new, efficient search technique: it exploits the physical principle of the center of gravity to move the particles toward the new best-predicted position. The proposed technique is used as a decision-making model integrated with an ANN as a learning model, forming the hybrid model ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate its effectiveness, the hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters to construct the desired network, with very high prediction accuracy and a very small error. The proposed model displayed better performance than the ANN, SPSO, SPSOCoG, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Using the DJIA dataset, it outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18%. In terms of elapsed time, using the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, 1.8 times faster than the SPSOCoG model, 2.2 times faster than the SPSO model, and 2.6 times faster than the ANN model. Using the DJIA dataset, it is approximately 1.2 times faster than the ANN-SPSO model, 1.3 times faster than the SPSOCoG model, 1.7 times faster than the SPSO model, and 1.9 times faster than the ANN model. The proposed ANN-PSOCoG also shows high efficiency under the effect of coronavirus disease (COVID-19): the model was trained using different stock market datasets, and the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets, confirming its efficiency and accuracy.
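The CoG-guided search described above can be illustrated with a minimal sketch. Here the center of gravity is computed as a fitness-weighted average of the particles' personal-best positions and acts as a third attractor alongside the classic cognitive and social terms; this weighting scheme and the extra coefficient `c3` are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import random

def pso_cog(fitness, dim, n_particles=20, iters=100,
            w=0.7, c1=1.5, c2=1.5, c3=1.0, bounds=(-5.0, 5.0)):
    """Minimize `fitness` with a PSO variant that adds a center-of-gravity
    (CoG) attractor. The CoG is a fitness-weighted mean of personal-best
    positions, so better particles pull harder (an illustrative assumption)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        # Center of gravity: weight each personal best by inverse (shifted) fitness.
        fmin = min(pbest_f)
        wts = [1.0 / (f - fmin + 1e-3) for f in pbest_f]
        total = sum(wts)
        cog = [sum(wt * pbest[i][d] for i, wt in enumerate(wts)) / total
               for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                r1, r2, r3 = random.random(), random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d])
                             + c3 * r3 * (cog[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

On a small test problem such as the 3-dimensional sphere function, the swarm settles close to the origin, which is a quick sanity check for any PSO variant.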

The proposed model might need more training time, and it requires modern computational resources when used with a very large dataset, an oil reservoir dataset for instance. Also, as the particle dimensions increase, the complexity will increase.

As future work, some points can be addressed to improve the proposed model:

(i) Studying the effect of changing the NNPHL for each layer, instead of keeping it the same across all hidden layers of the proposed model.

(ii) Extending the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) Discussing the effect of the activation function: applying the same activation function to all layers versus applying a different activation function to each layer.

(iv) Studying the impact of using more than one swarm for PSOCoG.

(v) Comparing the proposed model with other techniques, such as ANN-GA or ANN-ACO.

(vi) Using the proposed algorithm (PSOCoG) in combination with the Elman neural network (ENN-PSOCoG) to predict the closing price and comparing it with ANN-PSOCoG on a huge dataset.

(vii) Discussing more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).
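Future work item (ii), letting the swarm pick additional hyperparameters, amounts to widening each particle's position vector and decoding it into a training configuration. The sketch below shows one such decoding; the field names, value ranges, and the power-of-two batch-size encoding are illustrative assumptions, not the paper's encoding.

```python
def decode_particle(position):
    """Map a raw 5-dimensional particle position to an ANN training config.

    Dimensions (illustrative): [number of hidden layers, neurons per hidden
    layer (NNPHL), learning rate, log2(batch size), training iterations].
    """
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    return {
        "n_hidden_layers": int(round(clamp(position[0], 1, 5))),
        "nnphl": int(round(clamp(position[1], 2, 128))),
        "learning_rate": clamp(position[2], 1e-4, 1e-1),
        "batch_size": 2 ** int(round(clamp(position[3], 3, 8))),  # 8..256
        "iterations": int(round(clamp(position[4], 50, 1000))),
    }
```

For example, `decode_particle([2.4, 40.2, 0.01, 5.1, 200.0])` yields a 2-layer network with 40 neurons per hidden layer, a learning rate of 0.01, a batch size of 32, and 200 training iterations; out-of-range positions are clamped so every particle decodes to a valid configuration.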

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed via the following link: https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for the S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metric    S&P 500     Gold        NASDAQ-100    CANUSD
MAPE      0.00158     0.007799    0.00239       0.001719891
MAE       0.030529    0.079009    0.224702      1.19491E-05
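The error metrics reported in Table 3 (and plotted as RE in Figures 12-15) can be reproduced with the standard textbook definitions below; treating MAPE as a fraction rather than a percentage is an assumption, since the paper does not restate its exact formulas.

```python
def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error, expressed as a fraction of the actual value."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def relative_error(actual, predicted):
    """Per-sample relative error, as plotted in panel (b) of Figures 12-15."""
    return [(a - p) / a for a, p in zip(actual, predicted)]
```

For instance, with actual prices [100, 200] and predictions [99, 202], MAE is 1.5 and MAPE is 0.01, on the same scale as the Table 3 values.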


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyper-parameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] I. B. U., Z. Baharudin, R. M. Q., and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.

[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology (ICGST), vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/rimanual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, China, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in the USA during the COVID-19 crisis, report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.

Page 14: ANewANN-ParticleSwarmOptimizationwithCenterofGravity (ANN … · 2021. 4. 30. · 2.2.StandardParticleSwarmOptimization(SPSO).SPSO waspresentedbyKennedyandEberhart[24].SPSOmimics

ampe last test was training the proposed model using thehistorical data of CANUSD which represents the ratio be-tween the Canadian dollar and the American dollar ampeactual rate and expected rate are shown in Figure 15(a)while the predictionrsquos relative error is shown in Figure 15(b)

MAE and MAPE for the proposed ANN-PSOCoG usingthe historical data of SampP 500 Gold NASDAQ-100 andCANUSD are shown in Table 3

As seen from Table 3 and from Figures 11ndash14 the resultsof the proposed ANN-PSOCoG showed a high accuracy of

1-D

ec-1

9

31-D

ec-1

9

30-J

an-2

0

29-F

eb-2

0

30-M

ar-2

0

29-A

pr-2

0

29-M

ay-2

0

28-J

un-2

0

28-J

ul-2

0

27-A

ug-2

0

DaysACPECP

1400

2200Cl

osin

g pr

ice

1600

1800

2000

(a)

1-D

ec-1

9

31-D

ec-1

9

30-J

an-2

0

29-F

eb-2

0

30-M

ar-2

0

29-A

pr-2

0

29-M

ay-2

0

28-J

un-2

0

28-J

ul-2

0

27-A

ug-2

0

Days

ndash002

0

006

RE 002

004

(b)

Figure 13 Performance of ANN-PSOCoG over historical data of Gold (a) ACP versus ECP using ANN-PSOCoG (b) RE for ANN-PSOCoG

1-D

ec-1

9

31-D

ec-1

9

30-J

an-2

0

29-F

eb-2

0

30-M

ar-2

0

29-A

pr-2

0

29-M

ay-2

0

28-J

un-2

0

28-J

ul-2

0

27-A

ug-2

0

DaysACPECP

6000

12000

Clos

ing

pric

e

700080009000

1000011000

(a)

1-D

ec-1

9

31-D

ec-1

9

30-J

an-2

0

29-F

eb-2

0

30-M

ar-2

0

29-A

pr-2

0

29-M

ay-2

0

28-J

un-2

0

28-J

ul-2

0

27-A

ug-2

0

Days

ndash001

0

002RE

001

(b)

Figure 14 Performance of ANN-PSOCoG over historical data of NASDAQ-100 (a) ACP versus ECP using ANN-PSOCoG (b) RE forANN-PSOCoG


Figure 15: Performance of ANN-PSOCoG over historical data of CANUSD. (a) ACP versus ECP using ANN-PSOCoG. (b) RE for ANN-PSOCoG.

14 Scientific Programming

prediction for the closing price using different historical data of stock market indices during the peak spread of COVID-19. These results assure the ability of the proposed ANN-PSOCoG to predict the closing price with very high accuracy and confirm the efficiency of the proposed model under the effect of the coronavirus pandemic.

6. Conclusion and Future Work

In this paper, a new modification of particle swarm optimization using the "center of gravity" (CoG) concept, called PSOCoG, is proposed. This modification gives a new, efficient search technique: it exploits the physical principle of the center of gravity to move the particles toward the new best-predicted position. The newly proposed technique is used as a decision-making model, integrated with an ANN as a learning model, to form a hybrid model called ANN-PSOCoG. The decision-making model helps the ANN select the best hyperparameter values. To evaluate the effectiveness of the ANN-PSOCoG model, the proposed hybrid model was tested using historical data of two different datasets, S&P 500 and DJIA. The results show that the proposed model was able to select the best hyperparameters used to construct the desired network, with very high prediction accuracy and a very small error. The proposed model displayed better performance than the ANN, SPSO, and ANN-SPSO models in terms of prediction accuracy. Using the S&P 500 dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 25%, the SPSO model by approximately 20%, the SPSOCoG model by approximately 17%, and the ANN-SPSO model by approximately 13%. Using the DJIA dataset, the ANN-PSOCoG model outperforms the ANN model by approximately 42%, the SPSO model by approximately 33%, the SPSOCoG model by approximately 24%, and the ANN-SPSO model by approximately 18%. In terms of elapsed time on the S&P 500 dataset, the proposed model is approximately 1.7 times faster than the ANN-SPSO model, 1.8 times faster than the SPSOCoG model, 2.2 times faster than the SPSO model, and 2.6 times faster than the ANN model. On the DJIA dataset, the proposed model is approximately 1.2 times faster than the ANN-SPSO model, 1.3 times faster than the SPSOCoG model, 1.7 times faster than the SPSO model, and 1.9 times faster than the ANN model. The proposed ANN-PSOCoG shows high efficiency under the effect of coronavirus disease (COVID-19). The proposed model was trained using different stock market datasets, and the results proved the efficiency and accuracy of the proposed ANN-PSOCoG model: the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets.
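The CoG idea summarized above can be illustrated with a minimal sketch. This is an illustrative reading, not the authors' exact update rule: here the center of gravity is taken as the inverse-fitness-weighted mean of the personal-best positions and acts as an extra attractor alongside the usual personal- and global-best terms. The coefficient `c3`, the weighting scheme, and the toy sphere objective are all assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective used only to exercise the optimizer."""
    return float(np.sum(x ** 2))

def psocog(fitness, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal PSO with a center-of-gravity (CoG) attractor (illustrative)."""
    rng = np.random.default_rng(seed)
    w, c1, c2, c3 = 0.7, 1.2, 1.2, 0.4            # inertia + attraction weights (assumed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pfit = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pfit)].copy()             # global best position

    for _ in range(iters):
        # Center of gravity: personal bests weighted by inverse fitness,
        # so fitter particles pull the swarm harder (assumed weighting).
        wts = 1.0 / (pfit + 1e-12)
        cog = (wts[:, None] * pbest).sum(axis=0) / wts.sum()
        r1, r2, r3 = rng.random((3, n_particles, dim))
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
             + c3 * r3 * (cog - x))               # extra pull toward the CoG
        x = x + v
        fits = np.array([fitness(p) for p in x])
        better = fits < pfit
        pbest[better], pfit[better] = x[better], fits[better]
        g = pbest[np.argmin(pfit)].copy()
    return g, float(pfit.min())

best_x, best_f = psocog(sphere)
```

In the hybrid model, each particle would encode candidate hyperparameter values rather than a point on a test function, and `fitness` would be the validation error of the ANN built from those hyperparameters.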

The proposed model might need more training time, and it requires modern computational resources when used with a very large dataset (an oil reservoir dataset, for instance). Also, as the particle dimension increases, the complexity increases.

As future work, the following points can be considered to improve the proposed model:

(i) We will study the effect of changing the NNPHL for each layer, instead of keeping it the same across all hidden layers of the proposed model.

(ii) We will extend the proposed ANN-PSOCoG to select more hyperparameters, such as the number of iterations and the batch size.

(iii) We will examine the effect of the activation function: applying the same activation function to all layers versus a different activation function for each layer.

(iv) The impact of using more than one swarm for PSOCoG can be studied.

(v) A comparison of the proposed model with other techniques, such as ANN-GA or ANN-ACO, can be carried out.

(vi) The proposed algorithm (PSOCoG) in combination with an Elman neural network (ENN-PSOCoG) could be used to predict the closing price and compared with ANN-PSOCoG on a huge dataset.

(vii) We will consider more metrics, such as root-mean-square error (RMSE) and root-mean-square deviation (RMSD).

Data Availability

The data used in this study are available on the Yahoo Finance website and can be accessed at https://finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authorsrsquo Contributions

Razan Jamous, Mohamed El-Darieby, and Hosam ALRahhal carried out conceptualization and methodology. Razan Jamous and Hosam ALRahhal were responsible for software, formal analysis, data curation, writing, and original draft preparation. Mohamed El-Darieby performed reviewing and editing, supervised the work, and was responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Table 3: MAPE and MAE for S&P 500, Gold, NASDAQ-100, and CANUSD datasets.

Metrics   S&P 500    Gold       NASDAQ-100   CANUSD
MAPE      0.00158    0.007799   0.00239      0.001719891
MAE       0.030529   0.079009   0.224702     1.19491E-05
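The metrics reported in Table 3 can be computed with a short sketch. Since the paper does not show its exact formulas, this assumes MAPE as the mean absolute relative error expressed as a fraction (which matches the magnitudes in Table 3 rather than a percentage scaled by 100) and RE as the per-day relative error plotted in Figures 11–15; the toy prices are invented for illustration.

```python
import numpy as np

def error_metrics(actual, predicted):
    """MAPE, MAE, and per-point relative error (RE) for closing prices."""
    actual = np.asarray(actual, dtype=float)      # ACP: actual closing prices
    predicted = np.asarray(predicted, dtype=float)  # ECP: estimated closing prices
    re = (actual - predicted) / actual                 # relative error per day
    mape = float(np.mean(np.abs(re)))                  # mean absolute percentage error (fraction)
    mae = float(np.mean(np.abs(actual - predicted)))   # mean absolute error
    return mape, mae, re

# Toy prices (not the paper's data), just to show the call:
mape, mae, re = error_metrics([100.0, 200.0], [99.0, 202.0])
```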


References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] B. U. Islam, Z. Baharudin, M. Q. Raza, and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] A. S. M. and Y. S. K. S., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. Garcia, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," http://arxiv.org/abs/2004.01497.

[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology, ICGST, vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's research insight North America data guide," 2004, http://140.137.101.73:8008/ri/manual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, China, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. More, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies, IHIET 2019, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in USA during the COVID-19 crisis report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.



[61] F Ye ldquoParticle swarm optimization-based automatic pa-rameter selection for deep neural networks and its applica-tions in large-scale and high-dimensional datardquo PLoS Onevol 12 no 12 Article ID e0188746 2017

[62] J J More ldquoampe Levenberg-Marquardt algorithm imple-mentation and theoryrdquo in Lecture Notes in MathematicsG A Watson Ed vol 360 pp 105ndash116 Springer-VerlagBerlin Germany 1978

[63] A Botchkarev ldquoA new typology design of performancemetrics to measure errors in machine learning regressionalgorithmsrdquo Interdisciplinary Journal of InformationKnowledge and Management vol 14 pp 045ndash076 2019

[64] M Stang C Meier V Rau and E Sax ldquoAn evolutionaryapproach to hyper-parameter optimization of neural net-worksrdquo in Human Interaction and Emerging TechnologiesIHIET 2019 T Ahram R Taiar S Colson and A ChoplinEds vol 1018 Springer Cham Switzerland 2020

[65] ampe drop of the SampP 500 index in USA during the COVID-19crisis report httpsvoxeuorgarticlestock-market-and-economy-insights-covid-19-crisis

[66] WHO coronavirus disease (COVID-19) dashboard httpscovid19whoint

[67] Historical data for stock market indices httpsfinanceyahoocomlookup

Scientific Programming 17

Page 16: ANewANN-ParticleSwarmOptimizationwithCenterofGravity (ANN … · 2021. 4. 30. · 2.2.StandardParticleSwarmOptimization(SPSO).SPSO waspresentedbyKennedyandEberhart[24].SPSOmimics

References

[1] World Health Organization, "WHO characterizes COVID-19 as a pandemic," 11 March 2020, https://www.who.int/docs/default-source/coronaviruse/transcripts/who-audio-emergencies-coronavirus-press-conference-full-and-final-11mar2020.pdf?sfvrsn=cb432bb3_2.

[2] K. Andersen, "Morningstar's view: the impact of coronavirus on the economy," Stock Strategist Industry Reports, 2020, https://www.morningstar.com/articles/971254/morningstars-view-the-impact-of-coronavirus-on-the-economy.

[3] W. K. R. Fernando, "The economic impact of COVID-19," Economics in the Time of COVID-19, vol. 45, p. 45, 2020.

[4] Q. He, J. Liu, S. Wang, and J. Yu, "The impact of COVID-19 on stock markets," Economic and Political Studies, vol. 8, no. 3, pp. 275–288, 2020.

[5] A. Saini and A. Sharma, "Predicting the unpredictable: an application of machine learning algorithms in Indian stock market," Annals of Data Science, vol. 6, pp. 1–9, 2019.

[6] A. Thakkar and K. Chaudhari, "Fusion in stock market prediction: a decade survey on the necessity, recent developments, and potential future directions," Information Fusion, vol. 65, 2021.

[7] B. Liu, Q. Wu, and Q. Cao, "An improved Elman network for stock price prediction service," Security and Communication Networks, vol. 2020, Article ID 8824430, 9 pages, 2020.

[8] Y. Chen, K. Liu, Y. Xie, and M. Hu, "Financial trading strategy system based on machine learning," Mathematical Problems in Engineering, vol. 2020, Article ID 3589198, 13 pages, 2020.

[9] J. Shen and M. O. Shafiq, "Short-term stock market price trend prediction using a comprehensive deep learning system," Journal of Big Data, vol. 7, p. 66, 2020.

[10] Y. Baydilli and U. Atila, "Understanding effects of hyperparameters on learning: a comparative analysis," in International Conference on Advanced Technologies, Computer Engineering and Science, Ankara, Turkey, 2017.

[11] P. Probst, A. Boulesteix, and B. Bischl, "Tunability: importance of hyperparameters of machine learning algorithms," Journal of Machine Learning Research, vol. 20, no. 53, pp. 1–32, 2019.

[12] Y. Bengio, I. J. Goodfellow, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2015.

[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[14] S. Hayou, A. Doucet, and J. Rousseau, "On the impact of the activation function on deep neural networks training," 2019, http://arxiv.org/abs/1902.06853.

[15] L. Smith, "A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay," CoRR, vol. 2, 2018, http://arxiv.org/abs/1803.09820.

[16] L. Zajmi, H. A. Falah, and A. J. Adam, "Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks," Applied Computational Intelligence and Soft Computing, vol. 2018, Article ID 9547212, 7 pages, 2018.

[17] R. Hecht-Nielsen, "Theory of the backpropagation neural network," Neural Networks, vol. 1, pp. 445–448, 1988.

[18] B. U. Islam, Z. Baharudin, M. Q. Raza, and P. Nallagownden, "Optimization of neural network architecture using genetic algorithm for load forecasting," in Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1–6, Kuala Lumpur, Malaysia, 2014.

[19] Y. Ghanou and G. Bencheikh, "Architecture optimization and training for the multilayer perceptron using ant system," International Journal of Computer Science, vol. 43, no. 1, p. 10, 2016.

[20] J. Ananthi and V. Ranganathan, "Multilayer perceptron weight optimization using bee swarm algorithm for mobility prediction," IIOAB Journal, vol. 7, no. 9, pp. 47–63, 2016.

[21] M. A. S. and S. Y. S. K., "Web service classification using multi-layer perceptron optimized with Tabu search," in Advance Computing Conference (IACC), pp. 290–294, IEEE International, New York, NY, USA, 2015.

[22] H. Moayedi, D. Tien Bui, M. Gor, B. Pradhan, and A. Jaafari, "The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes," ISPRS International Journal of Geo-Information, vol. 8, no. 9, p. 391, 2019.

[23] C. C. Aggarwal, "An introduction to neural networks," in Neural Networks and Deep Learning, pp. 1–52, Springer, Cham, Switzerland, 2018.

[24] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, IEEE, Piscataway, NJ, USA, 1995.

[25] S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, "Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 3845–3852, Hong Kong, China, 2008.

[26] C. Hargreaves and Y. Hao, "Prediction of stock performance using analytical techniques," Journal of Emerging Technologies in Web Intelligence, vol. 5, no. 2, pp. 136–142, 2013.

[27] B. G. Malkiel, A Random Walk Down Wall Street, W. W. Norton & Company, New York, NY, USA, 1999.

[28] T. Helstrom and K. Holmstrom, "Predicting the stock market," Technical Report IMa-TOM-1997-07, Malardalen University, Sweden, 1997.

[29] R. W. Colby, The Encyclopedia of Technical Market Indicators, McGraw-Hill Press, New York, NY, USA, 2003.

[30] Historical data for datasets, https://finance.yahoo.com/world-indices.

[31] R. Arevalo, J. García, F. Guijarro, and A. Peris, "A dynamic trading rule based on filtered flag pattern recognition for stock market price forecasting," Expert Systems with Applications, vol. 81, pp. 177–192, 2017.

[32] R. Cervello-Royo, F. Guijarro, and K. Michniuk, "Stock market trading rule based on pattern recognition and technical analysis: forecasting the DJIA index with intraday data," Expert Systems with Applications, vol. 42, no. 14, pp. 5963–5975, 2015.

[33] T.-l. Chen and F.-y. Chen, "An intelligent pattern recognition model for supporting investment decisions in stock market," Information Sciences, vol. 346-347, pp. 261–274, 2016.

[34] M.-C. Lee, "Using support vector machine with a hybrid feature selection method to the stock trend prediction," Expert Systems with Applications, vol. 36, no. 8, pp. 10896–10904, 2009.

[35] R. Cervello-Royo and F. Guijarro, "Forecasting stock market trend: a comparison of machine learning algorithms," Finance, Markets and Valuation, vol. 6, no. 1, pp. 37–49, 2020.

[36] M. Nabipour, P. Nayyeri, H. Jabani, and A. Mosavi, "Deep learning for stock market prediction," 2020, http://arxiv.org/abs/2004.01497.


[37] H. White, Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns, Department of Economics, University of California, Los Angeles, CA, USA, 1988.

[38] N. I. Indera, I. M. Yassin, A. Zabidi, and Z. I. Rizman, "Non-linear autoregressive with exogeneous input (NARX) bitcoin price prediction model using PSO-optimized parameters and moving average technical indicators," Journal of Fundamental and Applied Sciences, vol. 9, no. 3S, p. 791, 2018.

[39] Y. H. Chen, B. Yang, and A. Abraham, "Flexible neural trees ensemble for stock index modeling," Neurocomputing, vol. 70, no. 4-6, pp. 697–703, 2007.

[40] R. Arjun, "Stock market index prediction using multilayer perceptron and long short-term memory networks: a case study on BSE Sensex," International Journal of Research and Scientific Innovation (IJRSI), vol. 5, 2018.

[41] T. Aboueldahab and M. Fakhreldin, "Stock market indices prediction via hybrid sigmoid diagonal recurrent neural networks and enhanced particle swarm optimization," International Congress for Global Science and Technology (ICGST), vol. 10, pp. 23–30, 2010.

[42] S. R. Das, D. Mishra, and M. Rout, "Stock market prediction using firefly algorithm with evolutionary framework optimized feature reduction for OSELM method," Expert Systems with Applications: X, vol. 4, Article ID 100016, 2019.

[43] Y. Hao and Q. Gao, "Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning," Applied Sciences, vol. 10, no. 11, p. 3961, 2020.

[44] M. Qiu and Y. Song, "Predicting the direction of stock market index movement using an optimized artificial neural network model," PLoS One, vol. 11, no. 5, Article ID e0155133, 2016.

[45] W.-C. Chiang, D. Enke, T. Wu, and R. Wang, "An adaptive stock index trading decision support system," Expert Systems with Applications, vol. 59, pp. 195–207, 2016.

[46] R. A. Jamous, E. El Seidy, A. A. Tharwat, and B. I. Bayoum, "Modifications of particle swarm optimization techniques and its application on stock market: a survey," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 6, no. 3, 2015.

[47] C. Worasucheep, S. Nuannimnoi, R. Khamvichit, and P. Attagonwantana, "An automatic stock trading system using particle swarm optimization," in Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 497–500, Phuket, Thailand, 2017.

[48] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1276–1282, Hong Kong, China, 2008.

[49] M. Isiet and M. Gadala, "Sensitivity analysis of control parameters in particle swarm optimization," Journal of Computational Science, vol. 41, Article ID 101086, 2020.

[50] R. Jamous, M. El-Darieby, and H. ALRahhal, "PSO-based artificial neural network configuration selection technique," Applied Artificial Intelligence Journal, 2020.

[51] Y. Liu, Z. Qin, Z. Shi, and J. Lu, "Center particle swarm optimization," Neurocomputing, vol. 70, no. 4-6, pp. 672–679, 2007.

[52] Historical data of Standard and Poor's 500 (S&P 500), https://finance.yahoo.com/quote/%5EGSPC/history?p=%5EGSPC.

[53] Historical data for DJIA stock market indices, https://finance.yahoo.com/quote/%5EDJI/history?period1=1475804800&period2=1606953600&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true.

[54] Standard & Poor's, "Standard & Poor's Research Insight North America data guide," 2004, http://140.137.101.73:8008/ri/manual/rium.htm.

[55] S&P Dow Jones Indices, Dow Jones Industrial Average.

[56] J. Xin, G. Chen, and Y. Hai, "A particle swarm optimizer with multi-stage linearly-decreasing inertia weight," in Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, pp. 505–508, Sanya, Hainan, China, 2009.

[57] S. Sengupta, S. Basak, and R. A. Peters, "Particle swarm optimization: a survey of historical and recent developments with hybridization perspectives," International Journal of Computer Theory and Engineering, vol. 1, pp. 157–191, 2019.

[58] G. Panchal, A. Ganatra, and Y. Kosta, "Behaviour analysis of multilayer perceptrons with multiple hidden neurons and hidden layers," International Journal of Computer Theory and Engineering, vol. 3, no. 2, 2011.

[59] A. J. Thomas, M. Petridis, S. D. Walters, S. M. Gheytassi, and R. E. Morgan, "On predicting the optimal number of hidden nodes," in Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence, pp. 565–570, Las Vegas, NV, USA, 2015.

[60] A. J. Thomas, S. D. Walters, S. M. Gheytassi, R. E. Morgan, and M. Petridis, "On the optimal node ratio between hidden layers: a probabilistic study," International Journal of Machine Learning and Computing, vol. 6, no. 5, pp. 241–247, 2016.

[61] F. Ye, "Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data," PLoS One, vol. 12, no. 12, Article ID e0188746, 2017.

[62] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Lecture Notes in Mathematics, G. A. Watson, Ed., vol. 630, pp. 105–116, Springer-Verlag, Berlin, Germany, 1978.

[63] A. Botchkarev, "A new typology design of performance metrics to measure errors in machine learning regression algorithms," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 14, pp. 045–076, 2019.

[64] M. Stang, C. Meier, V. Rau, and E. Sax, "An evolutionary approach to hyper-parameter optimization of neural networks," in Human Interaction and Emerging Technologies (IHIET 2019), T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., vol. 1018, Springer, Cham, Switzerland, 2020.

[65] The drop of the S&P 500 index in USA during the COVID-19 crisis, report, https://voxeu.org/article/stock-market-and-economy-insights-covid-19-crisis.

[66] WHO coronavirus disease (COVID-19) dashboard, https://covid19.who.int.

[67] Historical data for stock market indices, https://finance.yahoo.com/lookup.

Scientific Programming 17
