2013 12th International Conference on Machine Learning and Applications (ICMLA), Miami, FL. 978-0-7695-5144-9/13 © 2013 IEEE. DOI 10.1109/ICMLA.2013.99

Evolving Dynamic Forecasting Model for Foreign Currency Exchange Rates using Plastic Neural Networks

Gul Muhammad Khan, Durre Nayab, S. Ali Mahmud, Haseeb Zafar

Centre of Intelligent Systems and Networks Research, Electrical Engineering Department,

University of Engineering and Technology Peshawar, Pakistan. gk502, nayaab_khan, sahibzada.mahmud, [email protected]

Abstract— This work explores developmental plasticity in neural networks for forecasting trends in daily foreign currency exchange rates. We present an efficient artificial neural network (ANN) based dynamic prediction model that makes use of trends in the historical daily prices of a foreign currency to predict future daily rates while modifying its structure along with those trends. Plasticity in the ANN is exploited to obtain a prediction model that is computationally robust and efficient. The system performance analysis shows that the proposed prediction model is efficient, computationally cost effective, and unique in requiring only a small amount of previous data for future prediction. The model achieved accuracy as high as 98.852 percent in predicting a single day's price from a ten-day history, over a span of 1000 days (3 years). Further exploration demonstrated that when the problem domain for the network was changed to predicting daily currency prices for multiple chunks of days, a much better accuracy was achieved. This performance proved the robustness of the model proposed in this work for a modified problem domain.

Index Terms— Plastic Neural Networks (ANNs), Developmental Plasticity, Prediction Model, Neuro Evolution, Cartesian Genetic Programming.

I. INTRODUCTION

Plastic neural networks are becoming increasingly popular due to their innate ability to learn on the fly in a changing task environment. These dynamic ANNs generate new neural substructures when they are exposed to a changing learning scenario. Plasticity enables the ANNs to modify their network when the problem domain changes and to solve multiple linear/non-linear problems without suffering from catastrophic interference [7][13].

Financial time series applications have become a most challenging area with current computerization. Financial time series data are unstable, noisy, and abruptly changing [1][2][3][4]. The existing statistical models employed for time series forecasting are neither efficient nor flexible enough to deal effectively with the high fluctuations and uncertainty in time series behavior, and incremental network models do not take physical constraints into account [10]. According to studies [5][6], applying computational evolutionary techniques to time series applications yields comparatively better results, because these techniques can efficiently handle the noisy, unpredictable, and highly volatile data sets of a time series. This ability of the evolutionary techniques persists within a uniform problem domain; however, the network loses its ability to solve the problem if the problem domain is changed even slightly. This phenomenon is termed catastrophic forgetting: a trained network fails to solve the problem when the task is changed [7][13]. Computational techniques such as artificial neural networks (ANNs) are comparable to biological neural networks in their composition [7][25][20]. These ANNs evolve toward the most suitable scenario through a learning process that is similar to the evolution of biological neural networks. The functional units of these ANNs, known as neurons, are analogous to biological neurons: they take inputs, apply operations on them, and generate outputs [8][11].

In this work, our aim is to explore a developmental technique that encodes the neural network into an architecture with the ability to adapt, learn, and behave in accordance with a changing environment in real time. We have explored the Plastic Cartesian Genetic Programming based ANN (PCGPANN) to achieve a dynamic network able to constantly adjust its topology and weights in response to external environmental stimuli.

II. REVIEW OF RELATED WORK

A. Time Series Forecasting

The field of time series forecasting has been explored through several implementations, and approaches are discerned on the basis of their ability to predict future values and the ease with which they can be implemented. Traditional statistical models perform comparatively well when employed for time series forecasting with linear data sets, but have limitations with non-linear data sets, for instance stock indices [5][6]. A survey carried out in [23] showed that up to September 1994, 127 neural network based business applications had been published in international journals; in the following year, 86 more such publications were issued. There are now over 20 commercially available neural network programs designed for use on financial markets, and there have been some notable reports of their successful application. This shows the high rate of application of ANNs to time series forecasting.

The hidden Markov model (HMM) presented for time series forecasting in [1] shows vulnerability to external factors


such as stock indices, making it unstable and inflexible. To address these problems the ANN foreign exchange rate forecasting model (AFERFM) was designed, but full flexibility was not achieved. Two ANN techniques presented in [4], the Multilayer Perceptron (MLP) and the Volterra network, were employed for time series forecasting. The results of both ANNs were monitored, and it was observed that efficiency did not rise above 80%; the Volterra model with four layers and twelve inputs performed better than the MLP. In [22] a multi-neural ANN model was evolved with one master and three sub-networks. The model forecasted TWD/USD exchange rates and their tendencies from five macroeconomic factors: money supply, imports & exports, price level, interest rate, and productivity. Seven technical indicators were used to monitor the results at fifteen-day and one-day intervals, but the model was not flexible enough to deal with external factors affecting the system. In [24] an MLP forecasting model was presented that predicted USD/EUR exchange rates three days ahead of the preceding data. The model was efficient and produced accurate results, but it did not perform well when external factors continued to affect the system. The work done on time series forecasting with all of these statistical and non-statistical models revealed that none of them could perform well when the problem domain was changed even slightly or a dynamic scenario was presented to them.

B. Developmental Plasticity

Work on plasticity began when Nolfi presented an ANN model with genotype-to-phenotype mapping [18]. The external environment continued to change, and the network had the ability to adapt to it. Similar neural networks were proposed in [16], in which a single cell went through a process of cell division and migration until a two-dimensional neural network was formed from the resulting cells. The plastic neural model presented in [19] was able to develop itself under the effect of changes in the environment and implemented synaptic plasticity through dynamic creation and destruction of unused network connections; the network thus continued to change at run time, producing efficient results [19]. A developmental model based on a genetic algorithm with biologically realistic morphology was later presented in [15]. The drawback of that network was that its function could not be investigated.

The behavioral robustness of synaptic plasticity was explored by Floreano et al. in [14]: neuro-controllers solved a light-switching scenario with no reward and were investigated in four different learning scenarios. Contrary to classic ANNs, the plastic ANNs were able to learn in the changing environment, and it was observed that with synaptic plasticity complex problems can be solved better than with recurrent ANNs having fixed weights. In [17] the HyperNEAT encoding scheme was used to evolve the synaptic weights and the parameters of learning rules. The approach was tested on T-maze experiments and a foraging-bee scenario, changing the reward from time to time to examine the response to changing environmental conditions. It was observed that the network performed poorly in terms of fitness; later, selecting novel solutions rather than fitness alone was proposed, which showed enhanced learning capability of the agents. In [21] a compartmental model of a neuron was devised. The neuron was a collection of seven chromosomes encoding different computational functions representing different aspects of the neuron. The model had neurons, dendrites, and axon branches that grow, change, and die during the process of solving a computational problem. Cartesian genetic programming was explored in [21] to obtain suitable computational equivalents of the neural functions. The learning potential of the system was evaluated on the well-known agent-based wumpus world scenario. The results obtained in this work were promising, but the final network was not computationally efficient.

In this article, the term plasticity in ANNs refers in particular to ANNs that can change their morphology in accordance with a changing environment in real time. One of the unique aspects of our proposed approach is that, unlike other ANNs, PCGPANN has both synaptic and developmental plasticity. This technique was evaluated in [20] on the control problem of pole balancing under both linear and non-linear conditions; the results showed that PCGPANN outperformed static neuro-evolutionary algorithms both in speed of learning and in robustness [20].

III. CARTESIAN GENETIC PROGRAMMING

Julian F. Miller introduced the concept of Cartesian Genetic Programming (CGP) in 1999 [12]. CGP is a genetic programming approach that uses a 2D graphical representation of function nodes. CGP represents programs as directed acyclic graphs operating in a feed-forward fashion. The 2D graph, represented as a grid of interconnected function nodes, constitutes the phenotypic structural space, while the genotype can be represented as an array of integers. The final phenotype representing the ultimate network is the directed graph connecting inputs to outputs along the nodal grid. Each node has a node function, inputs, and outputs [13]. CGP provides a general platform for evolving hybrid structures of any number of networks in any order, and provides a complete Cartesian architecture for their interconnectivity patterns, producing a vast range of hybrid structures from a pool of networks. CGP has been explored in a range of applications, producing interesting results [13].
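As an illustration of this genotype-to-phenotype mapping, a minimal decoder can be sketched as follows; the two-function set, tuple layout, and names are assumptions for illustration, not the paper's exact encoding:

```python
# Minimal sketch of decoding/evaluating a CGP genotype (illustrative layout).
# Each node gene is (function_id, in1, in2); connection genes index either a
# program input or an earlier node, so the graph is acyclic and feed-forward.

FUNCTIONS = {0: lambda a, b: a + b, 1: lambda a, b: a * b}

def evaluate_cgp(genotype, output_genes, inputs):
    values = list(inputs)                     # indices 0..n_inputs-1 hold inputs
    for func_id, *conns in genotype:          # nodes evaluated in column order
        args = [values[c] for c in conns]     # connections point backwards only
        values.append(FUNCTIONS[func_id](*args))
    return [values[g] for g in output_genes]  # output genes may select any node
```

Nodes whose index is never reached from an output gene are simply never read, which is how CGP's inactive ("junk") nodes arise.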

IV. CARTESIAN GENETIC PROGRAMMING EVOLVED ARTIFICIAL NEURAL NETWORK

A major concern and an important aspect of the evolution of neural networks is the strategy employed for their representation. There are several approaches to the evolution of ANNs, as mentioned in Section II. The evolutionary strategy employed for the ANN model presented in this work is carried out with CGP. CGP evolves the ANNs with a unique architecture that makes them computationally cost effective and efficient; it traverses the ANN model swiftly and generates promising results because of its network representation and architecture. Figure 1 shows a typical CGPANN genotype/phenotype with inputs (I0, I1, I2, ..., I9), outputs (O0, O1, O2, ..., O9), nodes 0, 1, 2, 4, 5, 7, 8 & 9 active, nodes 3 & 6 inactive, and w0, w1, ..., w17 representing the weights. The number of inputs per node (arity) in this case is 2.

FIGURE 1: A TYPICAL CGPANN PHENOTYPE AND GENOTYPE

The string of numbers underneath the nodal figure, arranged in boxes, represents each individual node in terms of its function, inputs, and the weights associated with those inputs. The boxes with dotted lines represent the inactive nodes.

The functional unit of the CGPANN is a neuron [7]. The CGPANN network is composed of neurons arranged in a two-dimensional framework of rows and columns. The uniqueness of CGPANN is that it evolves not only the weights and topology of the neural network but also provides a unique architecture inspired by CGP. There may be nodes (junk neurons) that do not take part in the ultimate output of the evolved network, so the process ends up with an optimal network that is computationally efficient.
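A CGPANN node additionally carries a weight per connection and squashes the weighted sum through a sigmoid. The sketch below uses a hypothetical data layout, not the authors' exact genotype format; it also shows how the junk neurons mentioned above can be identified by tracing backwards from the output genes:

```python
import math

def cgpann_forward(nodes, output_genes, inputs):
    """nodes: list of (connection_indices, weights) per neuron; each neuron
    applies a log-sigmoid to the weighted sum of its connections."""
    values = list(inputs)
    for conns, weights in nodes:
        s = sum(values[c] * w for c, w in zip(conns, weights))
        values.append(1.0 / (1.0 + math.exp(-s)))   # log-sigmoid activation
    return [values[g] for g in output_genes]

def active_nodes(nodes, output_genes, n_inputs):
    """Junk neurons are the node indices not reachable from any output gene."""
    active, stack = set(), [g for g in output_genes if g >= n_inputs]
    while stack:
        idx = stack.pop()
        if idx in active:
            continue
        active.add(idx)
        conns, _ = nodes[idx - n_inputs]
        stack.extend(c for c in conns if c >= n_inputs)
    return sorted(active)
```

With two inputs and two neurons where only the first neuron feeds the output, `active_nodes` reports index 2 only; the second neuron is junk and can be skipped during evaluation.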

V. PLASTICITY IN CARTESIAN GENETIC PROGRAMMING EVOLVED ARTIFICIAL NEURAL NETWORKS

Plasticity in neural networks has always shown high throughput and comparatively better results [20][21]. Plasticity in CGPANN is achieved by using the output genes in the CGPANN genotype not only for the system output but also for making developmental decisions. Development of the network is invoked according to a decision function applied to the outputs, and development in the phenotype is introduced by mutation in the genotype at run time. A generalized view of PCGPANN is depicted in Figure 2, which shows the inputs, outputs, connections, and the part that invokes development in the network. The initial network is the original representation of the genotype, which changes in response to the output of the system over time. The development decision reflects the output of the system fed into the sum block shown in the figure: if the value obtained from the decision function is less than the defined decision value, mutation is invoked in the network; otherwise the outputs are monitored without any modification. What makes the network unique is that this mutation takes place in real time, which provides the plasticity of the network. PCGPANN invokes mutation in the network by mutating a node function, an input, or a weight, or by switching an input, in real time. Figure 3 is an illustration of PCGPANN.
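A minimal sketch of this development step, assuming a flat list-of-genes genotype and the 0.5 threshold shown in Figure 2 (the gene layout and mutation choices here are illustrative, not the paper's exact operators):

```python
import math
import random

def maybe_develop(outputs, genotype, threshold=0.5):
    """Sketch of the PCGPANN development step: a sigmoid of the summed
    feedback outputs gives the decision value; if it falls below the
    threshold, one gene of the genotype is mutated at run time."""
    decision = 1.0 / (1.0 + math.exp(-sum(outputs)))
    if decision >= threshold:
        return genotype                       # no modification: just monitor
    mutated = list(genotype)
    i = random.randrange(len(mutated))
    if isinstance(mutated[i], float):         # weight gene: redraw in [-1, 1]
        mutated[i] = random.uniform(-1.0, 1.0)
    else:                                     # function/connection gene
        mutated[i] = random.randrange(10)
    return mutated
```

Because the decision is re-evaluated on every system output, the network keeps reshaping itself whenever the environment drives the outputs below the threshold.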

(Figure 2 depicts the PCGPANN feedback loop: the ten system inputs I0-I9 with weights W0-W9 feed the CGPANN; its outputs O0-O9 pass through a summation function and a sigmoid function, and if the result is below 0.5, mutation is invoked in the network.)

FIGURE 2: A GENERALIZED APPROACH OF PCGPANN

The mutations in the function, weight, and input are shown with both phenotype and genotype representations.

(Figure 3 illustrates PCGPANN plasticity in four panels: (a) the original phenotype, a Tanh node feeding a Sigmoid output node; (b) the phenotype after function mutation, with Tanh replaced by Sigmoid; (c) the phenotype after weight mutation, with the weight 0.3 changed to 0.7; (d) the phenotype after input mutation, with the first node replaced by a direct input.)

FIGURE 3: PCGPANN PLASTICITY DEMONSTRATION

The original network phenotype and genotype are shown in Figure 3(a). As shown in the figure, there are two activation functions: sigmoid and hyperbolic tangent. The Ψ's represent the system inputs and outputs, and the weights are chosen randomly in [-1, 1]. The original network can be expressed mathematically as Eq. (1).

Ψ3 = Sig(0.6 Ψ1 + 0.3 Tanh(0.2 Ψ0 + 0.7 Ψ1))    (1)

In Figure 3(b) the first node function, Tanh, is mutated and replaced with the sigmoid function. The mathematical expression for the updated genotype is given in Eq. (2).

Ψ3 = Sig(0.6 Ψ1 + 0.3 Sig(0.2 Ψ0 + 0.7 Ψ1))    (2)


In Figure 3(c) the weight 0.3 at the first input of the second node is mutated to 0.7. The mathematical expression for the updated genotype becomes Eq. (3).

Ψ3 = Sig(0.6 Ψ1 + 0.7 Sig(0.2 Ψ0 + 0.7 Ψ1))    (3)

In Figure 3(d) a whole input is mutated: the first node is replaced with the input Ψ1, and the network attains the final expression given in Eq. (4).

Ψ3 = Sig(0.6 Ψ1 + 0.7 Ψ0)    (4)
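To make the effect of the three mutations concrete, Eqs. (1)-(4) can be evaluated numerically; the input values for Ψ0 and Ψ1 below are arbitrary sample values chosen only for illustration:

```python
import math

def sig(x):
    """Log-sigmoid activation used throughout the network."""
    return 1.0 / (1.0 + math.exp(-x))

# Arbitrary sample inputs (illustrative only).
psi0, psi1 = 0.5, -0.25

eq1 = sig(0.6 * psi1 + 0.3 * math.tanh(0.2 * psi0 + 0.7 * psi1))  # Eq. (1): original
eq2 = sig(0.6 * psi1 + 0.3 * sig(0.2 * psi0 + 0.7 * psi1))        # Eq. (2): function mutated
eq3 = sig(0.6 * psi1 + 0.7 * sig(0.2 * psi0 + 0.7 * psi1))        # Eq. (3): weight 0.3 -> 0.7
eq4 = sig(0.6 * psi1 + 0.7 * psi0)                                # Eq. (4): node -> input
```

Each mutation shifts the output smoothly; for instance, with the inner sigmoid positive, increasing the weight from 0.3 to 0.7 raises the output of Eq. (3) above that of Eq. (2).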

VI. EXPERIMENTAL SETUP

The currency exchange rate forecasting model proposed in this work is trained on currency data obtained from the Reserve Bank of Australia. Daily prices of the US dollar over 500 days are taken for training the model. Ten networks are trained on this data, each for five independent seeds. The training process begins with an initial random population of networks. Parameters such as the number of inputs per node, the activation function, the mutation rate, the number of offspring per generation, and the number of system inputs are initialized: the number of inputs per node is set to 5, the activation function is the log-sigmoid, the number of system inputs is 10, the mutation rate (μr) is 10%, and the number of offspring per generation (λ in the 1+λ evolutionary strategy) is 9. All of these parameters are selected on the basis of evolutionary performance and results from previous work on PCGPANN [24]. The model has only one row, since an infinite number of graphs can be represented in the one-row scenario, and the number of columns equals the number of nodes in the network. The initially generated genotype is mutated to produce nine offspring; the best of these genotypes is selected and mutated again, and the process is repeated until the best network is achieved. The network is further classified into half-inputs and full-inputs architectures: in the half-inputs architecture, the running average of half of the system inputs is fed to the decision function, whereas in the full-inputs architecture all system inputs are fed back. The decision function is the sigmoid function, selected on the basis of system evolutionary performance.
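The training loop described above is a (1+λ) evolutionary strategy with λ = 9. A minimal, generic sketch follows; the paper's genotype and mutation operator are abstracted behind the `fitness` and `mutate` callables, so this is an illustration of the strategy, not the authors' implementation:

```python
def evolve(fitness, mutate, initial, generations=1000, lam=9):
    """(1 + lambda) evolutionary strategy: each generation the parent
    produces lam mutated offspring; the best offspring replaces the
    parent if it is at least as fit. `fitness` is minimized (e.g. MAPE)."""
    parent, parent_fit = initial, fitness(initial)
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(lam)]
        best = min(offspring, key=fitness)
        best_fit = fitness(best)
        if best_fit <= parent_fit:    # ties prefer offspring (neutral drift)
            parent, parent_fit = best, best_fit
    return parent, parent_fit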

The system performance is evaluated in terms of the Mean Absolute Percentage Error (MAPE), computed by comparing the actual currency values with those estimated by the model. The mathematical expression for MAPE is:

MAPE = (1/N) Σ_{i=1}^{N} ( |L_Ai − L_Fi| / L_Ai ) × 100

where L_Fi are the forecasted values, L_Ai are the actual values, and N is the number of days. MAPE is a widely used international standard for evaluating the performance of a forecasting system.
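MAPE as defined above can be computed directly:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error:
    MAPE = (1/N) * sum(|L_Ai - L_Fi| / L_Ai) * 100."""
    n = len(actual)
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / n * 100
```

For example, forecasting 99 and 202 against actual values of 100 and 200 gives two 1% errors, hence a MAPE of 1.0.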

VII. RESULTS AND ANALYSIS

The PCGPANN model proposed in this work is trained on the 500 days of daily prices of the US dollar. The experiments for the training phase are carried out for one million generations. The optimal trained network is then tested for its performance on a different data set of 1000 days of daily prices of ten currencies, namely the Taiwanese Dollar (TWD), Singapore Dollar (SGD), Indonesian Rupiah (IDR), Japanese Yen (YEN), Canadian Dollar (CAD), New Zealand Dollar (NZD), Great Britain Pound (GBP), Euro (EUR), Malaysian Ringgit (MYR), and Swiss Franc (CHF). Known historical values are estimated during the testing phase and compared with the actual values to evaluate the performance of the model. Both the half-inputs and full-inputs architectures of the networks are tested.

The experiments are carried out for both the half-inputs and full-inputs network architectures; the half-inputs architecture generates better results, so only its results are presented here. Table I shows the training-phase results of the PCGPANN model in terms of MAPE values for the half-inputs network architecture. These results are averaged over five independent evolutionary runs for each network. The fittest network in the training phase is the 50-node network, with a MAPE value of 1.579. This is an efficient performance, as the model achieved the optimal accuracy for the smallest number of nodes (50 out of the 50-500 range). Hence networks with simple architecture and the fewest computational units are selected, which makes the model computationally economical.

Table II shows the testing-phase results of the PCGPANN model in terms of MAPE values for the half-inputs network architecture. The fittest network has 350 nodes, with a MAPE value of 1.1484. These results are again promising both in accuracy and in computational requirements. The PCGPANN model uses only the ten preceding daily prices as input, even though the trends are continuously affected by the international stock market; the eleventh day's currency price is predicted with respect to the Australian currency. The historical daily prices of the currencies fluctuate strongly and do not follow any defined pattern, and these fluctuations affect the performance of the model. Taking these high fluctuations in the input data into account, the accuracy achieved with PCGPANN is impressive compared with other ANNs. Table III shows a comparison between the accuracy of PCGPANN and that of other ANNs for the foreign currency exchange scenario; the best performance is achieved by PCGPANN with 98.85% accuracy.
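The input layout described above, ten preceding daily prices predicting the eleventh day, can be sketched as a simple windowing routine; the function name and return format are illustrative assumptions:

```python
def make_windows(prices, history=10):
    """Slide a fixed-length history window over a daily price series:
    each sample pairs `history` consecutive prices with the next day's
    price as the prediction target."""
    samples = []
    for i in range(len(prices) - history):
        samples.append((prices[i:i + history], prices[i + history]))
    return samples
```

A 500-day training series thus yields 490 such input/target pairs, which is how the network sees only recent local context rather than the whole history.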

TABLE I
TRAINING RESULTS OF HALF FEEDBACK (HFB) NETWORK OF PCGPANN

No. of Nodes   MAPE (HFB)
50             1.579
100            1.623
150            1.646
200            1.645
250            1.941
300            1.629
350            1.610
400            1.749
450            1.613
500            1.625


The plasticity of the CGPANN is examined by changing the prediction scenario: the model is tested under a changing environment by increasing the span of predictions. Table IV and Table V demonstrate the performance of the PCGPANN and CGPANN models, respectively, for long-run predictions. Results are monitored for the prediction of multiple chunks of 10, 30, 90, 180, and 360 days. They indicate that PCGPANN performs better than plain CGPANN in all cases, with a best MAPE value of 2.668 for the ten-day case with a 50-node network. This is a remarkable result, showing that the PCGPANN model outperforms the CGPANN model in the long run when the learning scenario is altered.

It can be observed that in all the testing results the best performance is achieved for the IDR data. The reason for this uniform result is that the IDR is comparatively less volatile than the other currencies.

TABLE II
TESTING RESULTS OF HALF FEEDBACK NETWORK OF PCGPANN FOR VARIOUS NETWORKS VERSUS CURRENCIES IN TERMS OF MAPE VALUES

Currency   50        100      150       200       250       300       350      400       450       500
TWD        1.6842    2.0116   1.9575    1.9550    2.8658    1.6763    1.9003   2.1216    1.9028    1.9194
TWI        1.6162    2.098    2.012     2.008     3.129     1.559     1.944    2.193     1.947     1.965
Euro       2.1501    2.260    2.254     2.253     2.751     2.202     2.221    2.360     2.223     2.233
GBP        1.7727    2.007    1.974     1.972     2.783     1.807     1.923    2.122     1.925     1.940
MYR        1.6013    1.9887   1.9233    1.9207    2.9350    1.5645    1.8640   2.0901    1.8666    1.8829
NZD        1.7493    1.769    1.782     1.781     1.940     1.804     1.763    1.840     1.765     1.771
CAD        1.6859    1.730    1.737     1.736     1.969     1.714     1.714    1.809     1.715     1.723
HKD        1.6719    2.128    2.046     2.043     3.142     1.612     1.987    2.212     1.990     2.005
SGD        1.4847    1.8216   1.7666    1.7641    2.7165    1.4549    1.7082   1.9360    1.7107    1.7271
CHF        1.8099    2.161    2.105     2.103     3.003     1.818     2.050    2.259     2.053     2.068
Yen        1.3617    1.580    1.566     1.564     2.184     1.347     1.513    1.721     1.516     1.532
SDR        2.3430    2.7079   2.6383    2.6359    3.6016    2.2612    2.5910   2.7708    2.5927    2.6051
KRW        1.4864    1.5647   1.5678    1.5669    1.8760    1.4796    1.5418   1.6612    1.5436    1.5511
IDR        1.1573    1.3255   1.3182    1.3165    1.8566    1.1484    1.2647   1.4860    1.2675    1.2851
CNY        1.6251    1.9817   1.9233    1.9208    2.8867    1.6017    1.8653   2.0840    1.8679    1.8845
Std        0.2799    0.3301   0.3065    0.3062    0.5753    0.2887    0.3041   0.3127    0.3040    0.3038

TABLE III
COMPARISON OF PCGPANN WITH OTHER ANNS

Network                          Accuracy (%)
AFERFM [1]                       81.2
HFERFM [1]                       69.9
Multi Layer Perceptron [4]       72
Volterra Network [4]             76
Back Propagation Network [22]    62.27
Multi Neural Network [22]        66.82
CGPANN                           98.85
PCGPANN (Proposed)               98.8516

TABLE IV
PERFORMANCE OF PCGPANN (HFB) FOR MULTIPLE CHUNKS OF DAYS

Currency   10       30       90        180       360       Std
KRW        3.616    6.254    11.455    24.273    36.342    13.697569
SDR        4.987    9.080    17.215    29.382    51.989    18.918943
Euro       5.394    9.179    17.509    28.690    41.797    14.882283
GBP        4.564    7.344    13.734    26.034    51.627    19.184409
CHF        5.046    9.310    18.087    28.550    49.898    17.922339
HKD        4.499    8.272    15.902    28.199    48.230    17.702751
CAD        3.925    6.487    11.785    19.235    32.862    11.64407
SGD        3.818    7.131    13.125    22.963    47.600    17.603432
MYR        4.143    7.443    14.086    26.093    48.440    17.958851
TWD        4.196    7.326    12.970    24.494    45.212    16.650238
TWI        4.492    8.800    15.853    28.889    50.816    18.68204
IDR        2.668    5.296    10.780    17.792    33.198    12.221832
CNY        4.124    7.327    13.697    26.056    47.887    17.794312
NZD        4.478    7.456    11.814    20.556    18.933    7.0075761
Yen        3.762    7.285    13.876    22.473    34.511    12.389957

TABLE V
PERFORMANCE OF CGPANN FOR MULTIPLE CHUNKS OF DAYS

Currency   10       30       90        180       360       Std
KRW        3.623    6.277    11.564    24.470    36.484    13.76208
TWI        4.499    8.829    15.969    29.173    50.957    18.74839
Euro       5.399    9.202    17.615    28.842    41.837    14.90644
GBP        4.570    7.371    13.805    26.154    51.669    19.19775
CHF        5.052    9.334    18.177    28.820    49.946    17.95406
SGD        3.825    7.156    13.239    23.321    47.847    17.70973
CAD        3.929    6.513    11.884    19.270    32.989    11.68419
HKD        4.507    8.299    16.012    28.479    48.416    17.78813
SDR        4.993    9.104    17.319    29.654    52.089    18.96950
MYR        4.150    7.471    14.197    26.396    48.626    18.04241
TWD        4.202    7.351    13.095    24.816    45.410    16.73952
NZD        4.482    7.479    11.885    20.652    18.966    7.034809
IDR        2.675    5.325    10.900    17.917    33.280    12.24902
CNY        4.131    7.353    13.809    26.378    48.073    17.88039
Yen        3.770    7.322    13.954    22.680    34.552    12.41816

VIII. CONCLUSION AND FUTURE WORK

We have explored developmental plasticity in ANNs for the implementation of a forecasting model for foreign currency exchange rates. We used Cartesian Genetic Programming (CGP) to encode and evolve the computational networks of the proposed system. These networks are inspired by the biological nervous system, which is able to evolve and modify itself in accordance with changing environmental conditions. The results illustrate that the system is robust and capable of learning within a changing task environment. The evolved networks indicate that only a few recent inputs are required


for the prediction of future values. This property simplifies the model and makes it efficient, since a small number of recent inputs suffices for prediction. Future work can extend this approach to areas such as signature identification and verification, medical diagnosis, voice recognition, character recognition, image recognition, face recognition, data mining, loan risk evaluation, control systems, music composition, system identification, associative memories, system energy requirements, economic indicators, medical outcomes, crop forecasts and environmental risk forecasting. A robust and efficient PCGPANN model can produce accurate and swift outputs while utilizing the fewest possible inputs in any dynamic environment.

IX. REFERENCES

[1] A. A. Philip, A. A. Tofiki and A. A. Bidemi, "Artificial Neural Network Model for Forecasting Foreign Exchange Rate," World of Computer Science and Information Technology Journal, vol. 1, no. 3, pp. 110-118, 2011.

[2] J. H. Gould, "Forex Prediction Using An Artificial Intelligent System," Thesis, Oklahoma State University, 2004.

[3] G. Zhang and M. Y. Hu, "Neural Network Forecasting of the British Pound/US Dollar Exchange Rate," Omega, Int. J. Mgmt. Sci., vol. 26, no. 4, pp. 495-506, 1998.

[4] O. V. Kryuchin, A. A. Arzamastsev, and K. G. Troitzsch, “The prediction of currency exchange rates using artificial neural networks,” Exchange Organizational Behavior Teaching Journal, no. 4, 2011.

[5] A. N. Refenes, M. Azema-Barac, L. Chen, and S. A. Karoussos, “Currency Exchange Rate Prediction and Neural Network Design Strategies,” Neural Computing & Applications, vol. 1, no. 1, pp. 46-58, Mar. 1993.

[6] C. Kadilar and H. Alada, "Forecasting the Exchange Rate Series with ANN: The Case of Turkey," Economics and Statistics Changes, pp. 17-29, 2009.

[7] M. M. Khan, G. M. Khan, and J. F. Miller, “Evolution of Optimal ANNs for Non-Linear Control Problems using Cartesian Genetic Programming”, International Conference on Artificial Intelligence, ICAI, pp. 339-346, 2010.

[8] J. Kamruzzaman and R. A. Sarker, "Forecasting of Currency Exchange Rates Using ANN: A Case Study," Int. Conf. Neural Networks & Signal Processing, pp. 793-797, IEEE, 2003.

[9] M. Bidlo, "Evolutionary Design of Generic Combinational Multipliers Using Development," Evolvable Systems: From Biology to Hardware, pp. 77-88, Springer Berlin Heidelberg, 2007.

[10] A. Haider and M. N. Hanif, “Inflation Forecasting in Pakistan using Artificial Neural Networks,” Pakistan Economic and Social Review, vol. 47, no. 1, pp. 123-138, 2009.

[11] C. Igel, “Neuroevolution for Reinforcement Learning Using Evolution Strategies,” The Congress on Evolutionary Computation 2003, CEC’03, vol. 4, pp. 2588-2595, IEEE, 2003.

[12] J. A. Rothermich and J. F. Miller, "Studying the Emergence of Multicellularity with Cartesian Genetic Programming in Artificial Life," Proceedings of the 2002 UK Workshop on Computational Intelligence, pp. 397-403, 2002.

[13] J. F. Miller, "Cartesian Genetic Programming," Genetic Programming, Natural Computing Series, Springer Berlin Heidelberg, 2011.

[14] D. Floreano and J. Urzelai, “Evolutionary robots with on-line self-organization and behavioral fitness,” Neural Networks 13, no. 4, pp. 431–443, 2000.

[15] A. G. Rust, R. Adams and H. Bolouri, “Evolutionary neural topiary: Growing and sculpting artificial neurons to order,” In Proc. of the 7th Int. Conf. on the Simulation and synthesis of Living Systems (ALife VII), pp. 146–150, The MIT Press, 2000.

[16] A. Cangelosi, S. Nolfi and D. Parisi, “Cell division and migration in a ’genotype’ for neural networks,” Network-Computation in Neural Systems, no. 5, pp. 497–515, 1994.

[17] K. Stanley and R. Miikkulainen, "Competitive coevolution through evolutionary complexification," Journal of Artificial Intelligence Research (JAIR), vol. 21, pp. 63-100, 2004.

[18] S. Nolfi, O. Miglino and D. Parisi, “Phenotypic plasticity in evolving neural networks,” In Proceedings of the International Conference from Perception to Action, pp. 146-157, IEEE Press, 1994.

[19] A. Upegui, A. Perez-Uribe, Y. Thoma, and E. Sanchez, “Neural Development on the Ubichip by Means of Dynamic Routing Mechanisms,” Springer-Verlag Berlin Heidelberg, pp. 392–401, 2008.

[20] M. M. Khan, G. M. Khan, and J. F. Miller, "Developmental Plasticity in Cartesian Genetic Programming Artificial Neural Networks," ICINCO, 2011.

[21] G. M. Khan, J. F. Miller, and D. M. Halliday, "A developmental model of neural computation using Cartesian Genetic Programming," Proceedings of the 2007 GECCO Conference Companion on Genetic and Evolutionary Computation, pp. 2535-2542, ACM, 2007.

[22] A. P. Chen, Y. C. Hsu and K. F. Hu, "A Hybrid Forecasting Model for Foreign Exchange Rate Based on a Multi-neural Network," Fourth International Conference on Natural Computation, ICNC, vol. 5, pp. 293-298, 2008.

[23] E. M. Azoff, "Neural Network Time Series Forecasting of Financial Markets," John Wiley & Sons, Inc., 1994.

[24] V. Pacelli, V. Bevilacqua and M. Azzollini, "An Artificial Neural Network Model to Forecast Exchange Rates," Journal of Intelligent Learning Systems and Applications, JILSA, no. 3(2A), pp. 57-69, 2011.

[25] J. T. Jeng and T. T. Lee, "An approximate equivalence neural network to conventional neural network for the worst-case identification and control of nonlinear system," International Joint Conference on Neural Networks, IJCNN, vol. 3, pp. 2104-2108, IEEE, 1999.
