# Sequential optimizing investing strategy with neural networks

Ryo Adachi (a,b), Akimichi Takemura (a,*)

(a) Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
(b) Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
(*) Corresponding author. E-mail address: takemura@stat.t.u-tokyo.ac.jp (A. Takemura).

Expert Systems with Applications 38 (2011) 12991–12998. doi:10.1016/j.eswa.2011.04.098. © 2011 Elsevier Ltd. All rights reserved.

Keywords: Back propagation; Forecasting; Game-theoretic probability; Time series

Abstract

In this paper we propose an investing strategy based on neural network models combined with ideas from game-theoretic probability of Shafer and Vovk. Our proposed strategy uses parameter values of a neural network with the best performance until the previous round (trading day) for deciding the investment in the current round. We compare the performance of our proposed strategy with various strategies, including a strategy based on supervised neural network models, and show that our procedure is competitive with other strategies.

1. Introduction

A number of researches have been conducted on prediction of financial time series with neural networks since Rumelhart, Hinton, and Williams (1986) developed the back propagation algorithm in 1986, which is the most commonly used algorithm for supervised neural networks. With this algorithm the network learns its internal structure by updating the parameter values when we give it training data containing inputs and outputs. We can then use the network with updated parameters to predict future events containing inputs the network has never encountered. The algorithm is applied in many fields such as robotics and image processing, and it shows a good performance in prediction of financial time series. Relevant papers on the use of neural networks for financial time series include (Freisleben, 1992; Hanias, Curtis, & Thalassinos, 2007; Khoa, Sakakibara, & Nishikawa, 2006; Yoon & Swales, 1991). In these papers the authors are concerned with the prediction of time series, and they do not pay much attention to actual investing strategies, although prediction is obviously important in designing practical investing strategies. A forecast of tomorrow's price does not immediately tell us how much to invest today. In contrast to these works, in this paper we directly consider investing strategies for financial time series based on neural network models and ideas from the game-theoretic probability of Shafer and Vovk (2001). In the game-theoretic probability established by Shafer and Vovk, various theorems of probability theory, such as the strong law of large numbers and the central limit theorem, are proved by consideration of capital processes of betting strategies in various games such as the coin-tossing game and the bounded forecasting game. In game-theoretic probability a player, Investor, is regarded as playing against another player, Market. In this framework investing strategies of Investor play a prominent role. Prediction is then derived based on strong investing strategies (cf. defensive forecasting in Vovk, Takemura, & Shafer (2005)).

Recently, in Kumon, Takemura, and Takeuchi (2011) we proposed sequential optimization of parameter values of a simple investing strategy in multi-dimensional bounded forecasting games and showed that the resulting strategy is easy to implement and shows a good performance in comparison to well-known strategies such as the universal portfolio (Cover, 1991) developed by Thomas Cover and his collaborators. In this paper we propose sequential optimization of parameter values of investing strategies based on neural networks. Neural network models give a very flexible framework for designing investing strategies. With simulation and with some data from the Tokyo Stock Exchange we show that the proposed strategy performs well.

The organization of this paper is as follows. In Section 2 we propose the sequential optimizing strategy with neural networks. In Section 3 we present some alternative strategies for the purpose of comparison. In Section 3.1 we consider an investing strategy using a supervised neural network with the back propagation algorithm; this strategy is closely related to, and reflects, existing researches on stock price prediction with neural networks. In Section 3.2 we consider Markovian proportional betting strategies, which are much simpler than the strategies based on neural networks. In Section 4 we evaluate the performances of these strategies by Monte Carlo simulation. In Section 5 we apply these strategies to stock price data from the Tokyo Stock Exchange. Finally, we give some concluding remarks in Section 6.

2. Sequential optimizing strategy with neural networks

Here we introduce the bounded forecasting game of Shafer and Vovk (2001) in Section 2.1 and the network models we use in Section 2.2. In Section 2.3 we specify the investing ratio by an unsupervised neural network and we propose sequential optimization of the parameter values of the network.

2.1. Bounded forecasting game

We present the bounded forecasting game formulated by Shafer and Vovk (2001). In the bounded forecasting game, Investor's capital at the end of round $n$ is written as $\mathcal{K}_n$ ($n = 1, 2, \ldots$) and the initial capital $\mathcal{K}_0$ is set to be 1. In each round, Investor first announces the amount of money $M_n$ he bets ($|M_n| \le \mathcal{K}_{n-1}$) and then Market announces her move $x_n \in [-1, 1]$. Here $x_n$ represents the change of the price of a unit financial asset in round $n$. The bounded forecasting game can be considered as an extension of the classical coin-tossing game, since it reduces to the classical coin-tossing game if $x_n \in \{-1, 1\}$. With $\mathcal{K}_n$, $M_n$ and $x_n$, Investor's capital after round $n$ is written as $\mathcal{K}_n = \mathcal{K}_{n-1} + M_n x_n$.

The protocol of the bounded forecasting game is written as follows.

Protocol:
    $\mathcal{K}_0 := 1$.
    FOR $n = 1, 2, \ldots$:
        Investor announces $M_n \in \mathbb{R}$.
        Market announces $x_n \in [-1, 1]$.
        $\mathcal{K}_n := \mathcal{K}_{n-1} + M_n x_n$.
    END FOR

We can rewrite Investor's capital as $\mathcal{K}_n = \mathcal{K}_{n-1}(1 + \alpha_n x_n)$, where $\alpha_n = M_n / \mathcal{K}_{n-1}$ is the ratio of Investor's investment $M_n$ to his capital $\mathcal{K}_{n-1}$ after round $n-1$. We call $\alpha_n$ the investing ratio at round $n$. We restrict $\alpha_n$ as $-1 \le \alpha_n \le 1$ in order to prevent Investor from becoming bankrupt. Furthermore, we can write $\mathcal{K}_n$ as

$$\mathcal{K}_n = \mathcal{K}_0 (1 + \alpha_1 x_1) \cdots (1 + \alpha_n x_n) = \prod_{k=1}^{n} (1 + \alpha_k x_k).$$

Taking the logarithm of $\mathcal{K}_n$ we have

$$\log \mathcal{K}_n = \sum_{k=1}^{n} \log(1 + \alpha_k x_k). \qquad (1)$$

The behavior of Investor's capital in (1) depends on the choice of $\alpha_k$. Specifying a functional form of $\alpha_k$ is regarded as an investing strategy. For example, setting $\alpha_k \equiv \epsilon$ to be a constant for all $k$ is called the $\epsilon$-strategy, which is presented in Shafer and Vovk (2001). In this paper we consider various ways to determine $\alpha_k$ in terms of the past values $x_{k-1}, x_{k-2}, \ldots$ of $x$ and seek better $\alpha_k$ in trying to maximize the future capital $\mathcal{K}_n$, $n > k$.

Let $u_{k-1} = (x_{k-1}, \ldots, x_{k-L})$ denote the past $L$ values of $x$ and let $\alpha_k$ depend on $u_{k-1}$ and a parameter $\omega$: $\alpha_k = f(u_{k-1}, \omega)$. Then

$$\omega^*_{k-1} = \arg\max_{\omega} \sum_{t=1}^{k-1} \log(1 + f(u_{t-1}, \omega) x_t)$$

is the best parameter value until the previous round. In our sequential optimizing investing strategy, we use $\omega^*_{n-1}$ to determine the investment $M_n$ at round $n$:

$$M_n = \mathcal{K}_{n-1} f(u_{n-1}, \omega^*_{n-1}).$$

For the function $f$ we employ neural network models for their flexibility, which we describe in the next section.
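The capital process in (1) is straightforward to simulate. Below is a minimal sketch of the bounded forecasting protocol under the constant $\epsilon$-strategy; the synthetic Market moves and the function name `run_epsilon_strategy` are illustrative assumptions, not part of the paper.

```python
import math
import random

def run_epsilon_strategy(xs, eps):
    """Play the bounded forecasting game with the constant
    investing ratio alpha_k = eps (the epsilon-strategy)."""
    capital = 1.0                    # K_0 = 1
    for x in xs:                     # Market's moves x_n in [-1, 1]
        capital *= 1.0 + eps * x     # K_n = K_{n-1}(1 + alpha_n x_n)
    return capital

# synthetic price changes, for illustration only
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(250)]
for eps in (-0.5, 0.0, 0.5):
    print(eps, run_epsilon_strategy(xs, eps))
```

Note that restricting $-1 \le \epsilon \le 1$ keeps every factor $1 + \epsilon x_n$ nonnegative, so the simulated Investor can never go below zero capital, matching the bankruptcy constraint above.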

2.2. Design of the network

We construct a three-layered neural network shown in Fig. 1. The input layer has $L$ neurons and they just distribute the inputs $u_j$ ($j = 1, \ldots, L$) to every neuron in the hidden layer. The hidden layer has $M$ neurons and we write the input to each of its neurons as $I^2_i$, which is a weighted sum of the $u_j$'s. As seen from Fig. 1, $I^2_i$ is obtained as

$$I^2_i = \sum_{j=1}^{L} \omega^{1,2}_{ij} u_j,$$

where $\omega^{1,2}_{ij}$ is called the weight, representing the synaptic connectivity between the $j$th neuron in the input layer and the $i$th neuron in the hidden layer. Then the output of the $i$th neuron in the hidden layer is described as

$$y^2_i = \tanh(I^2_i).$$

As the activation function we employ the hyperbolic tangent function. In a similar way, the input to the neuron in the output layer, which we write $I^3$, is obtained as

$$I^3 = \sum_{i=1}^{M} \omega^{2,3}_i y^2_i,$$

where $\omega^{2,3}_i$ is the weight between the $i$th neuron in the hidden layer and the neuron in the output layer. Finally, we have

$$y^3 = \tanh(I^3),$$

which is the output of the network. In the following argument we use $y^3$ as an investing strategy. Thus we can write

$$\alpha_k = y^3 = f(u_{k-1}, \omega), \quad \text{where} \quad \omega = (\omega^{1,2}, \omega^{2,3}) = \left( (\omega^{1,2}_{ij})_{i=1,\ldots,M;\; j=1,\ldots,L},\; (\omega^{2,3}_i)_{i=1,\ldots,M} \right).$$

Fig. 1. Three-layered network.
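The network is small enough to code directly. The following sketch (plain Python; the name `forward` is mine, not from the paper) computes $y^3 = \tanh\big(\sum_i \omega^{2,3}_i \tanh(\sum_j \omega^{1,2}_{ij} u_j)\big)$:

```python
import math

def forward(u, w12, w23):
    """Forward pass of the three-layered network: returns the
    investing ratio y3 = f(u, omega) in (-1, 1).
    u   : list of L past price changes (input layer)
    w12 : M x L weights between input and hidden layer
    w23 : M weights between hidden layer and the output neuron
    """
    # hidden layer: y2_i = tanh(sum_j w12[i][j] * u[j])
    y2 = [math.tanh(sum(wij * uj for wij, uj in zip(row, u))) for row in w12]
    # output neuron: y3 = tanh(sum_i w23[i] * y2_i)
    return math.tanh(sum(wi * yi for wi, yi in zip(w23, y2)))

# toy example with L = 3 inputs and M = 2 hidden neurons (illustrative values)
u = [0.01, -0.02, 0.005]
w12 = [[0.5, -0.3, 0.1], [-0.2, 0.4, 0.6]]
w23 = [0.7, -0.5]
alpha = forward(u, w12, w23)
assert -1.0 < alpha < 1.0
```

A convenient side effect of the $\tanh$ output unit is that $y^3$ always lies in $(-1, 1)$, so using it as the investing ratio automatically respects the constraint $-1 \le \alpha_k \le 1$ from Section 2.1.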

2.3. Sequential optimizing strategy with neural networks

In this section we propose a strategy which we call the sequential optimizing strategy with neural networks (SOSNN).

We first calculate $\omega = \omega^*_{n-1}$ that maximizes

$$\Phi(\omega) = \sum_{k=1}^{n-1} \log(1 + f(u_{k-1}, \omega) x_k). \qquad (2)$$

With $\omega^*_{n-1}$, Investor's capital is written as

$$\mathcal{K}_n = \mathcal{K}_{n-1}\left(1 + f(u_{n-1}, \omega^*_{n-1})\, x_n\right).$$

We need to specify the number of inputs $L$ and the number of neurons $M$ in the hidden layer. It is difficult to specify them in advance. We compare various choices of $L$ and $M$ in Sections 4 and 5. Also in $u_{n-1}$ we can include any input which is available before the start of round $n$, such as moving averages of past prices, seasonal indicators or past values of other economic time series data. We give further discussion on the choice of $u_{n-1}$ in Section 6.

For maximization of (2), we empl
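The excerpt is cut off before naming the paper's optimizer for (2). As a stand-in, the sketch below maximizes $\Phi(\omega)$ by naive random search over the weights (function names and the search scheme are my assumptions, not the paper's method), then returns the investing ratio for the coming round.

```python
import math
import random

def forward(u, w12, w23):
    """Three-layer network of Section 2.2: y3 = tanh(w23 . tanh(w12 u))."""
    y2 = [math.tanh(sum(wij * uj for wij, uj in zip(row, u))) for row in w12]
    return math.tanh(sum(wi * yi for wi, yi in zip(w23, y2)))

def phi(omega, xs, L):
    """Phi(omega) = sum_k log(1 + f(u_{k-1}, omega) x_k), eq. (2)."""
    w12, w23 = omega
    total = 0.0
    for k in range(L, len(xs)):
        u = xs[k - L:k][::-1]          # u_{k-1} = (x_{k-1}, ..., x_{k-L})
        total += math.log(1.0 + forward(u, w12, w23) * xs[k])
    return total

def sosnn_step(xs, L=3, M=2, trials=200, seed=1):
    """Choose omega*_{n-1} by random search over the weights, then
    return the investing ratio f(u_{n-1}, omega*) for round n."""
    rng = random.Random(seed)
    best, best_val = None, -math.inf
    for _ in range(trials):
        omega = ([[rng.uniform(-1, 1) for _ in range(L)] for _ in range(M)],
                 [rng.uniform(-1, 1) for _ in range(M)])
        val = phi(omega, xs, L)
        if val > best_val:
            best, best_val = omega, val
    u_last = xs[-L:][::-1]             # u_{n-1}
    return forward(u_last, best[0], best[1])

# toy run on synthetic daily returns (illustration only)
xs = [0.1, -0.05, 0.02, 0.03, -0.01, 0.04, -0.02]
alpha_next = sosnn_step(xs, L=2, M=2, trials=100, seed=42)
```

Because $|f| < 1$ and $|x_k| \le 1$, every term $1 + f(u_{k-1}, \omega)x_k$ in (2) is positive, so the logarithms are always defined; in practice a gradient-based optimizer would replace the random search.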
