2007 IEEE International Symposium on Intelligent Signal Processing, Alcalá de Henares, Spain


Application of Computational Intelligence Techniques for Energy Load and Price Forecast in some States of USA

João C. Mourão, António E. Ruano

Centre for Intelligent Systems, Faculty of Sciences and Technology, University of Algarve, Campus de Gambelas

8005-139, Faro, Portugal

Abstract - The purpose of this paper is to forecast the load and the price of electricity 49 hours ahead. To accomplish these goals, computational intelligence techniques were used, specifically artificial neural networks and genetic algorithms. The neural networks employed are RBFs (Radial Basis Functions), fully connected and with just one hidden layer. The genetic algorithm used was MOGA (Multiple Objective Genetic Algorithm), which, as the name indicates, minimizes not a single objective but several. The neural networks are trained for one step ahead, and their output is fed back until 49 hours are calculated. MOGA is used for the input selection and for topology determination. The data used was kindly given by the University of Auburn, USA, and refers to real data from some North-American states. In the end the results are shown.

Keywords - Load and price forecast, genetic algorithms, neural networks.

I. INTRODUCTION

In the last years several countries have been progressively adopting a new paradigm in the energy market, replacing a traditional centralized and monopolistic market environment by a liberal and competitive one [1]. In this new market, the energy produced by the power plants can be bought by operator companies, which then re-sell the energy. The objective of this market liberalization is to optimize the process of production and the cost of energy.

In the electric case an even greater reason exists to optimize the process: electricity is an energy of difficult storage, so if the electricity is not spent within some short period of time, this energy will inevitably be lost. Therefore it is important, both to the producer and the operator companies, to predict the electricity load, so they can optimize the relation between production and demand of energy.

Normally, the trade of energy in the electric market is performed in two ways: bilateral contracts or pool based. Bilateral contracts are usually used for long terms, and encompass an agreement between the producer and the consumer which establishes the quantity of energy, the price and the time during which the trade process will be performed. The other way is the pool based scheme. In this case [2] producers present their offers of production, including the prices. Then, the operator companies do the same with the purpose of buying electricity. At each hour the price of energy, both from the side of producers and of consumers, is "cleared" through a market tool called the Market Clearing Price (MCP). This tool is necessary to avoid an unbalance between offer and demand.

The work described in this paper is to predict the load and price of electricity for the next day. This next day has a lag in time of 25 hours, so we are interested in forecasting the load and the price for the next 25 to 49 hours, in relation to the hour when the prediction is performed. Fig. 1 illustrates the scenario for a prediction performed today at 12 a.m. As used in [3], a Radial Basis Function Network (RBFN) is applied for the prediction, integrating a Multi-Objective Genetic Algorithm (MOGA) [4]. The MOGA is responsible for selecting the input terms and the number of neurons of the neural network. The output of the RBFN is fed back until 49 hours are calculated. The data used in this work were kindly given by the University of Auburn, Alabama, and refer to some North-American states.

[Figure: time line from the present (12 a.m.) to 12 a.m. two days later; the interested prediction interval starts after a lag of 25 hours.]
Fig. 1. TIME OF PREDICTION.

1-4244-0830-X/07/$20.00 ©2007 IEEE.
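The feedback scheme just described, where a one-step-ahead model's output is fed back as an input until the 49-hour horizon is covered, can be sketched as follows. This is a minimal illustration: `one_step_model` and the simple lag window stand in for the MOGA-selected RBFN inputs, which this sketch does not reproduce.

```python
import numpy as np

def recursive_forecast(one_step_model, history, horizon=49):
    """Iterate a one-step-ahead predictor over `horizon` steps.

    `one_step_model` maps a vector of recent lags to the next value;
    `history` holds the most recent observations (newest last).
    Each prediction is appended and fed back as an input for the next step.
    """
    window = len(history)
    lags = list(history)
    predictions = []
    for _ in range(horizon):
        y_next = one_step_model(np.asarray(lags[-window:]))
        predictions.append(y_next)
        lags.append(y_next)  # feedback: the prediction becomes an input
    return np.asarray(predictions)
```

With the paper's setting, `horizon=49` and only the last 25 of the 49 predictions (the interval of interest) would be kept.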

A. Introduction to Load Forecasting

Load forecasting is of vital importance for the electric industry and for companies related to it. Electric plants and companies around the world rely on these estimations to make important decisions, including decisions related with the production or purchase of electricity [5]. Fig. 3 shows the evolution of the load along an entire year. It can be seen that the load varies through the seasons; for example, in summer (when it is hotter) the load increases; the same happens in winter, but in spring and fall the load decreases. Therefore we can conclude that the load is related with the weather, which has a fundamental role in the majority of techniques used for prediction. In fact, according to [5], changes in the weather can cause, in extreme days, an increase of a factor of 10 in the energy price. From the 22 reports considered in [6], 13 just use the temperature, 3 employ temperature and humidity, 3 use additional parameters of weather evaluation, and 3 just employ load parameters. From this we can assume that the weather is of great importance to load forecasting, and we further assume that the weather can be described using only the temperature.

[Figure: load (x 10^4) against time (h), January to December.]
Fig. 3. EVOLUTION OF THE LOAD OVER THE YEAR.

Another influence that the load suffers is related with the class of the day, i.e. a working day or a weekend day. On a weekend the load normally drops compared with working days. This makes sense, as on the weekend the majority of the companies are closed, so less load is needed.

B. Introduction to Price Forecasting

The other quantity that we are interested in predicting is the price of electricity. This quantity presents a large unpredictability. Fig. 2 shows the evolution of the price over one entire year. Elementary statistical analysis shows that there are large instant variations; for example, the average standard deviation is 26.66 US Dollars ($), considering a sampling interval of 24 hours. The average standard deviation drops to 9.8$ if we consider a sampling interval of 2 hours, i.e. the standard deviation of two consecutive hours, although in some points the standard deviation arrives at 110$ between two adjacent samples. From this scenario we can really understand that the prediction of price is a difficult task, and it is an essential tool that allows companies to save and profit millions of Dollars or Euros.

[Figure: price ($) against time (h), January to December.]
Fig. 2. EVOLUTION OF THE PRICE OVER THE YEAR.

Truly there exists a relation between load and price, which allows us to say that normally when the load rises the price also rises, and vice-versa. Although in general correct, this cannot be considered always true. To explain this we first need to know the scenarios in which the producer companies operate. A producer company has in general a certain number of power plants available. These power plants normally differ from one another in the quantity of electricity produced, and also in the cost of production. At some moment there are plants activated and others deactivated, according to the load demands. So, when the demand increases, the producers have to activate more plants, translating this cost of activation into the price of the electricity paid by the consumers. On the other side, when the load drops, plants are not justified to operate, so they are shut down. However, if the load's drop does not justify the deactivation of the plant, then the instantaneous load has little reflection on the price, as the plant is still operating.

To emphasize the relation between load and price, fig. 4 presents the price and the load for the first 100 hours of January. Both quantities were normalized to fit in the interval [0 1]. Analyzing this figure, the assumption expressed before can be validated, but again we should not forget the further arguments stated previously. Together with the factors explained, we can also add the variation in the price due to other factors, such as fuel prices, political issues, etc.

[Figure: normalized price and load for the first 100 hours, time (h).]
Fig. 4. PRICE AND LOAD.
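The sampling-interval analysis above can be reproduced along the following lines. Since the exact statistic is not spelled out, this sketch assumes one plausible reading, the standard deviation of each pair of samples spaced `interval` hours apart, averaged over the whole series; it is for illustration only.

```python
import numpy as np

def average_pairwise_std(series, interval):
    """Average the standard deviation of sample pairs `interval` hours apart.

    For two values a, b the (population) standard deviation is |a - b| / 2,
    so this measures the typical jump between samples at that spacing.
    """
    series = np.asarray(series, dtype=float)
    pairs = np.stack([series[:-interval], series[interval:]])
    return float(np.mean(np.std(pairs, axis=0)))
```

Applied to an hourly price series, this kind of statistic can be compared for intervals of 2 and 24 hours, as done in the text.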

II. RBF NEURAL NETWORK

The topology of RBF neural networks involves three fully connected layers with three completely different roles. In the input layer we have the sources (sensors) which connect the network to its working environment. The hidden layer applies a non-linear transformation from the input vectorial space to the hidden layer space. The output is linear, giving the response of the network to the activation pattern (signal) applied on the first layer. In this sense the network performs a mapping from the input space to the output space, passing before through the intermediate layer. Fig. 5 shows the network topology used in this paper. In the figure, x are the inputs, \varphi are the activation functions of the neurons and w are the network weights. Note the bias component with the subscript b. Formally, the output is given by

f(x) = \sum_{i=1}^{n} w_i \varphi(\|x - c_i\|)   (1)

In this work the activation function \varphi has the Gaussian form,

\varphi_i(x_j) = \exp\left(-\frac{\|x_j - c_i\|^2}{2\sigma_i^2}\right), for \sigma_i, c_i > 0 and x_j \in R,   (2)

where c and \sigma are respectively the center and the spread, and x_j is one input pattern.

[Figure: RBF network with inputs x_1, x_2, ..., x_m, hidden neurons \varphi_1, \varphi_2, ..., \varphi_n plus a bias neuron \varphi_b, weights w and output f(x).]
Fig. 5. NEURAL NETWORK SCHEMATIC.

A. Training Method

The network is trained off-line in a supervised way. The Levenberg-Marquardt (LM) method is used to minimize the error between the target and the output. The criterion used is the new training criterion proposed and analyzed in [7]. The algorithm used for the initialization of the centres of the activation functions (\varphi) is the Optimal Adaptive K-means (OAKM) algorithm [8], which is an improved version of K-means Clustering [9]. For the spreads the heuristic described in [10] is used:

\sigma_i = \frac{d_{max}}{\sqrt{2n}}, i = 1, ..., n,   (3)

where d_{max} is the maximum distance between the centres that were previously calculated by the OAKM algorithm.

The training criterion used is given by

e(c, \sigma) = \|t - \Phi\Phi^{+}t\|_2^2,   (4)

where e(c, \sigma) is the sum of the squares of the errors, dependent on the center (c) and spread (\sigma) values, t is the vector of targets, \Phi is the matrix obtained at the output of the hidden layer, and ^{+} denotes a pseudo-inverse. This training criterion has the particularity of not depending on the linear parameters (w), as they are computed as a least squares problem, whatever values the centers (c) and spreads (\sigma) might take [9]. The termination criterion used is the early stopping technique, which stops the training at the best performance point [10].

III. MULTI-OBJECTIVE GENETIC ALGORITHM

Genetic Algorithms (GA) are methods of stochastic search. Typically the algorithm is exemplified using the Darwinian theory: there is an initial population of individuals, all of them competing for the same objective. In each iteration, each one of the individuals in the current population is evaluated, and a set of individuals is selected to form offspring, therefore generating a new population. From time to time, a randomly selected individual suffers a mutation. When applying this algorithm to real problems it is necessary to give some stopping criterion. Then the best individuals are chosen and the algorithm finishes.

MOGA requires that several criteria are satisfied. Those objectives are not always in accordance between themselves, often conflicting with each other. Better quality in one objective can result in the degradation of another one. Because of that, most of the time multi-objective problems do not have a unique optimal solution. In these cases, there is a set of solutions that satisfy the used criteria. A solution is said to be non-dominated or Pareto-optimal if there is not another solution that can better satisfy all the criteria [4].

IV. IMPLEMENTATION

A. Input Data

For the implementation of this work five variables were used: price, load, temperature, dew point and humidity. Each one of these variables is sampled hourly, during an entire year. Besides the real values, for each variable we considered also an average of 24, 168 and 720 hours, corresponding, respectively, to a daily, weekly and monthly average (the technique used is the so-called moving average). Using it, it is possible to filter high frequencies. Besides, to avoid numeric problems, all the data was normalized to fit in the interval [0 1]. This input universe is divided in three sets: training, generalization and validation. Data is available in a time line of 8017 hours (corresponding to a year of 365 days); for training we chose the first 4010 hours, for generalization the interval [4011 6017], and for validation the remaining hours.
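Equations (1) to (4) can be sketched in NumPy as below: the Gaussian hidden layer, the d_max spread heuristic, and the pseudo-inverse training criterion. This is a minimal sketch; the bias neuron, the OAKM centre initialization and the Levenberg-Marquardt updates of the centres and spreads are omitted.

```python
import numpy as np

def rbf_design_matrix(X, centers, spreads):
    """Hidden-layer output Phi: Gaussian of the distance to each centre, eq. (2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spreads**2))

def spread_heuristic(centers):
    """Eq. (3): sigma = d_max / sqrt(2 n), d_max the largest inter-centre distance."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return np.full(n, d.max() / np.sqrt(2.0 * n))

def training_criterion(X, t, centers, spreads):
    """Eq. (4): e(c, sigma) = ||t - Phi Phi^+ t||^2.

    The linear weights w = Phi^+ t, eq. (1), fall out of a least squares
    problem, so the criterion depends only on centres and spreads.
    """
    Phi = rbf_design_matrix(X, centers, spreads)
    w = np.linalg.pinv(Phi) @ t   # pseudo-inverse solution for the output layer
    residual = t - Phi @ w
    return float(residual @ residual), w
```

In a full training loop, LM would adjust `centers` and `spreads` to minimize this criterion, with early stopping on the generalization set.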

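The input preparation of Section IV.A, hourly series plus 24, 168 and 720 hour moving averages, normalization to [0 1], and a chronological three-way split, could be sketched as follows; the split boundaries follow the ones stated in the text, and the function names are illustrative.

```python
import numpy as np

def moving_average(series, window):
    """Trailing moving average over `window` hours (daily 24, weekly 168, monthly 720)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

def normalize01(series):
    """Rescale to the interval [0, 1] to avoid numeric problems."""
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo)

def chronological_split(series):
    """Training (first 4010 h), generalization ([4011 6017]) and validation (rest)."""
    return series[:4010], series[4010:6017], series[6017:]
```

Each of the five hourly variables would pass through this pipeline before feeding the MOGA input-selection stage.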
TABLE I. MOGA FORMULATION.

  Objective    | Goal
  OSATE        | 0.03
  OSAGE        | Minimize
  max(R(E^T))  | Minimize
  ||w||        | 10

B. Model Evaluation

MOGA needs, in each iteration, to evaluate the individuals. The objectives can be classified in two groups: complexity and performance objectives. In terms of complexity the Euclidean norm of the network weights, ||w||, is used. To measure the performance, the sum of the squared one-step-ahead errors for the training (OSATE) and the generalization (OSAGE) sets, and also a measure of error for long term prediction, R(E^T), are employed. The long term performance of the model prediction is measured using all the data sets, DS_t, DS_g and DS_v, within a prediction horizon of 49 steps (49 hours). At each starting point the prediction error e[k] is computed over the prediction horizon (1 to 49 steps ahead). Consider the following matrix:

E = \begin{bmatrix} e(1,1) & \cdots & e(1,49) \\ \vdots & \ddots & \vdots \\ e(p,1) & \cdots & e(p,49) \end{bmatrix}   (5)

where each column is indexed to one prediction horizon, and p denotes the dimension of the set that one is evaluating (training, generalization or validation). In this way the long term prediction error is given by the maximum of the Root Mean Square of E^T, max(R(E^T)), which indicates the maximum RMSE over all the prediction horizons [9]. Table I presents how MOGA was formulated. For the OSATE an error of 0.03 or less (one should note that input data is normalized between 0 and 1) is pursued, while both OSAGE and the maximum long term error, max(R(E^T)), should be minimized. The norm of the network weights, ||w||, was also formulated as a restriction and should be less than 10.

Besides the methods that MOGA used to evaluate the results, we also measured the performance of the results using percentage methods. Among these types of methods, the most famous is the Mean Absolute Percentage Error (MAPE) defined in [11]:

MAPE(%) = \frac{1}{N} \sum_{i=1}^{N} \frac{|Q_A^i - Q_F^i|}{Q_A^i} \times 100   (6)

where Q_A is the actual value of the quantity, Q_F the forecasted one, N the number of points in the pattern, and i the index. Although (6) works quite well for load, for the price the formula does not give reliable results. This is because price changes abruptly and can also have zero values, which would make equation (6) take infinite values. For this reason, the authors of [12] present another way to measure the error, this time defined by:

MAPE*(%) = \frac{1}{N} \sum_{i=1}^{N} \frac{|Q_A^i - Q_F^i|}{median(Q_A)} \times 100   (7)

The difference between equations (6) and (7) is the use of the median value of the quantity instead of the instant values, making (7) less sensitive to spikes and zero values. Please also note that the performance of the models is also evaluated by the sum of the squares of the errors in the validation set (OSAVE).

C. Experiences for Load Forecasting

To find the best way to predict load, several experiences were performed in order to devise a strategy that allows the best results to be obtained. In this way, three experiences were carried out: case 1, load forecasting using all the weather parameters available (temperature, dew point and humidity); case 2, using just temperature to describe the weather; and case 3, using only load. Table II summarizes the three experiences. It can be seen that the experience made in case 2 (with just temperature as weather variable) has the best results. Looking at the results for the prediction interval of interest (explained in fig. 1), there is a difference of 1.3% for case 1 and around 3.5% for case 3.

TABLE II. SUMMARY OF THE LOAD EXPERIENCES RESULTS

          |           | 1 hour  | 49 hours | Interval of interest
  Case 1  | RMSE (MW) | 1091.5  | 6505     | 6086.3
          | MAPE (%)  | 0.977   | 6.5761   | 6.1842
  Case 2  | RMSE (MW) | 1176.7  | 5251.2   | 4888.6
          | MAPE (%)  | 1.1083  | 5.2415   | 4.8561
  Case 3  | RMSE (MW) | 955.33  | 9908     | 8938.1
          | MAPE (%)  | 0.79071 | 9.7598   | 8.3513

D. Experiences for Price Forecasting

Analogously to the load forecasting, some experiences were performed to discover the best strategy to predict price. Therefore, two experiences were executed: case 1, price forecasting using the electricity load; case 2, using just price as input information. Table III shows a summary of the experiences developed. The results obtained show a great similarity in terms of RMSE. From these results we can conclude that the two experiences are very similar in terms of performance.

TABLE III. SUMMARY OF THE PRICE EXPERIENCES RESULTS

          |           | 1 hour | 49 hours | Interval of interest
  Case 1  | RMSE ($)  | 23.151 | 31.897   | 31.584
          | MAPE (%)  | 25.859 | 63.32    | 49.566
          | MAPE* (%) | 26.361 | 36.866   | 36.905
  Case 2  | RMSE ($)  | 23.022 | 31.71    | 31.41
          | MAPE (%)  | 22.656 | 55.77    | 45.823
          | MAPE* (%) | 25.361 | 38.032   | 37.001

V. RESULTS

Since the best load forecasting experience made use of temperature, temperature also has to be predicted. Having it, a real load forecasting can be performed. After this, the estimated load will be used for the price forecasting.
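The evaluation measures above, the horizon-wise RMSE behind max(R(E^T)) and the percentage errors of equations (6) and (7), can be sketched as:

```python
import numpy as np

def max_horizon_rmse(E):
    """max(R(E^T)): RMSE of each column of the p x 49 error matrix, then the maximum."""
    rmse_per_horizon = np.sqrt(np.mean(E**2, axis=0))
    return float(rmse_per_horizon.max())

def mape(actual, forecast):
    """Eq. (6): mean absolute percentage error; unreliable when `actual` has zeros."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

def mape_star(actual, forecast):
    """Eq. (7): as (6) but dividing by the median of the actual values,
    which makes the measure robust to spikes and zero prices."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(actual - forecast)) / np.median(actual) * 100.0)
```

Note that for a series with zeros, `mape` diverges while `mape_star` stays finite, which is exactly the motivation given in the text for eq. (7).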

A. Temperature Forecasting

To predict temperature, only past temperature values were used. The neural network takes as input universe the information of 50 lags, taken from the last 100 hours, and also the three already referred averages (daily, weekly and monthly). Fig. 6 shows the temperature results. The top graph depicts the prediction for just 1 hour ahead, while the middle graph shows the 49 hour prediction. The bottom graph shows the prediction in the interval of interest, i.e., at each day at 12:00, the 24 values shown next correspond to the predictions between 25 and 49 hours ahead. A MAPE of 9% and a RMSE of 4.80 degrees Fahrenheit were obtained for the interval of interest.

[Figure: real and forecasted temperature over 350 hours, in three panels: 1 hour ahead, 49 hours ahead and the interval of interest.]
Fig. 6. TEMPERATURE FORECASTING.

B. Load Forecasting

In a similar way, the load forecasting was performed, this time using the temperature estimated in the last section. Additionally to the load and temperature past information, a vector that classifies the days as work and weekend days was added. Holidays were included in the weekend days class. Fig. 7 shows the load results. As can be seen, a very good performance was obtained, especially considering that the load estimations are obtained based on predictions of temperature, i.e., they are the results of two cascaded models. Table IV presents the numeric results. Although these results are worse than the ones calculated when performing the load forecasting experiences, it should be noted that the previous results used real values of temperature, while in this case predicted values are employed.

TABLE IV. LOAD FORECASTING RESULTS.

  Error         | 1 step ahead | 49 steps ahead | Interval of interest
  Average (MW)  | 2346.86      | 5150.67        | 4559.91
  Maximum (MW)  | 13095        | 19158          | 19942
  RMSE (MW)     | 3523.4       | 6351.9         | 5683.5
  MAPE (%)      | 3.4826       | 6.4462         | 5.7975

[Figure: real and forecasted load (x 10^4) over 350 hours, in three panels: 1 hour ahead, 49 hours ahead and the interval of interest.]
Fig. 7. LOAD FORECASTING.

C. Price Forecasting

For the price forecasting, the load predicted in the last section was used as input, including obviously price lags. The scheme was the usual one: lagged terms for both variables and the three averages (daily, weekly and monthly). Fig. 8 shows the price results obtained. As can be seen from the graphics, price is a very volatile quantity, being difficult to forecast. Table V presents the results for price prediction. The RMSE attains 33$, while the MAPE*(%) is around 39%. The results obtained are very far
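The cascading described above, where predicted temperature feeds the load model (and, in the next subsection, predicted load feeds the price model), can be expressed as a simple chain. The two `*_model` callables are placeholders for the trained one-step-ahead RBFNs, not the paper's actual models.

```python
import numpy as np

def cascade_forecast(temp_model, load_model, temp_lags, load_lags, horizon=49):
    """Chain two one-step models: each predicted temperature becomes an
    extra input of the load model, and both predictions are fed back."""
    temps, loads = list(temp_lags), list(load_lags)
    load_out = []
    for _ in range(horizon):
        t_next = temp_model(np.asarray(temps[-len(temp_lags):]))
        temps.append(t_next)
        l_next = load_model(np.asarray(loads[-len(load_lags):]), t_next)
        loads.append(l_next)
        load_out.append(l_next)
    return np.asarray(load_out)
```

Errors accumulate along the chain, which is why the cascaded results in Table IV are worse than the single-model experiences that used real temperature.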

from the performance exhibited for the load case, and are slightly worse than the ones obtained without using load as an input (see Table III, Case 2).

TABLE V. PRICE FORECASTING RESULTS.

  Error        | 1 step ahead | 49 steps ahead | Interval of interest
  Minimum ($)  | 0.028378     | 0.034047       | 0.012849
  Average ($)  | 17.572       | 25.965         | 25.665
  Maximum ($)  | 142.67       | 168.43         | 167.31
  RMSE ($)     | 24.008       | 33.924         | 33.591
  MAPE (%)     | 27.463       | 77.042         | 72.879
  MAPE* (%)    | 26.501       | 38.825         | 39.14

[Figure: real and forecasted price ($) over 350 hours, in three panels: 1 hour ahead, 49 hours ahead and the interval of interest.]
Fig. 8. PRICE FORECASTING.

VI. CONCLUSIONS

In this work we forecasted both energy load and price using neural networks and genetic algorithms, for a time horizon up to 49 hours. The obtained results allow us to conclude that these techniques are well suited for this kind of problem. Both for load and price the predictions are quite reliable, especially for the load case, where we obtained an error of around 6%. For the price case the prediction is more difficult to perform, as price is a very volatile variable, presenting a very large number of spikes which are very difficult to predict. Despite this, as can be seen from the results, the forecast obtained for the price follows quite well the evolution of the real price.

In this kind of problems it is difficult to perform comparisons with other works in the same area. The first reason has to do with the time horizon: we cannot compare projects whose prediction horizons are different from ours. Second, the results are closely related with the data used. If we had used different data, for example from another country, then the results would be different, as the evolution pattern changes from place to place.

Another important point should be noted. Most of the companies are not able to get the evolution of the load in real time, but only with delays of up to several hours. This is why a forecast that has an horizon of two days can be of much help in real cases.

REFERENCES

[1] B. P. Gallachir, C. V. Chiorean, and E. J. McKeogh, "Conflicts between electricity market liberalization and wind energy policies," in Proc. of the 2002 Global Wind Energy Conference, Paris, France, 2-5 Sep. 2002.
[2] J. Contreras, A. J. Conejo, and S. de la Torre, "Experience with an electricity market simulation tool," Production Planning & Control, vol. 14, no. 2, pp. 135-145, 2003.
[3] P. Ferreira, A. Ruano, and C. Fonseca, "Genetic assisted selection of RBF model structures for prediction," in Proc. of the IEEE Conference on Control Applications, Istanbul, Turkey, 2003, pp. 576-581.
[4] C. Fonseca and P. Fleming, "Multiobjective optimization and multiple constraint handling with evolutionary algorithms - part I: A unified formulation," IEEE Trans. on Systems, Man, and Cybernetics, vol. 28, no. 1, pp. 26-37, Jan. 1998.
[5] J. Chow, F. Wu, and J. Momoh, Eds., Applied Mathematics for Restructured Electric Power Systems: Optimization, Control, and Computational Intelligence, Springer, 2004.
[6] H. Hippert, C. Pedreira, and R. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, pp. 44-55, 2001.
[7] P. Ferreira and A. Ruano, "Exploiting the separability of linear and nonlinear parameters in radial basis function networks," in Proc. of the IEEE International Symposium on Adaptive Systems for Signal Processing, Communications and Control, Lake Louise, Alberta, Canada, 2000, pp. 321-326.
[8] C. Chinrungrueng and C. Sequin, "Optimal adaptive k-means algorithm with dynamic adjustment of learning rate," IEEE Trans. on Neural Networks, vol. 6, no. 1, pp. 157-169, Jan. 1995.
[9] A. Ruano, Ed., Intelligent Control Systems using Computational Intelligence Techniques, IEE, 2005.
[10] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1998.
[11] P. Mandal, T. Senjyu, N. Urasaki, and T. Funabashi, "A neural network based several-hour-ahead electric load forecasting using similar days approach," Energy Conversion and Management, vol. 47, pp. 2128-2142, Sep. 2006.
[12] H. Y. Yamin, S. M. Shahidehpour, and Z. Li, "Adaptive short-term electricity price forecasting using artificial neural networks in the restructured power markets," Electrical Power & Energy Systems, vol. 26, no. 8, pp. 571-581, 2004.