
Complex Systems 8 (1994) 295-309

Pattern Search Using Genetic Algorithms and a Neural Network Model

Shigetoshi Nara
Department of Electrical and Electronic Engineering, Faculty of Engineering, Okayama University, Tsuchimanaka 3-1-1, Okayama 700, Japan

Wolfgang Banzhaf*
Central Research Laboratory, Mitsubishi Electric Corporation, Tsukaguchi Honmachi 8-1-1, Amagasaki, Hyogo 661, Japan

Abstract. An information processing task that generates combinatorial explosion and program complexity when treated by a serial algorithm is investigated using both genetic algorithms and a neural network model. The task in question is to find a target memory from a set of stored entries in the form of "attractors" in a high-dimensional state space. The representation of entries in the memory is distributed (an "autoassociative neural network" in this paper) and the problem is to find an attractor under given access information, where the uniqueness or even existence of a solution is not always guaranteed (an ill-posed problem). The genetic algorithm is used for generating a search orbit to search effectively for a state that satisfies the access condition and belongs to the target attractor basin in the state space. The neural network is used to retrieve the corresponding entry from the network. The results of our computer simulations indicate that the present method is superior to a search method that uses a Markovian random walk in state space. Our techniques may prove useful in the realization of flexible and adaptive information processing, since pattern search in a high-dimensional state space is common in various kinds of parallel information processing.

1. Introduction

Although great progress has been made with modern LSI computers based on the von Neumann type of serial algorithms, there has been growing interest

*Current address: Department of Computer Science, Dortmund University and Informatics Center Dortmund, Joseph-von-Fraunhofer-Straße 20, 44227 Dortmund, Germany


in other types of information processing, such as the parallel or flexible processing typically observed in biological systems. Serial algorithms run into problems with (1) combinatorial explosion and (2) program complexity in realizing flexible functions. Generally speaking, the main problem of flexible information processing is the fact that there are too many degrees of freedom for sequential control. One way to improve flexibility in information processing is the application of nonlinear dynamics, including chaos [1]. Nonlinear dynamics has been characterized as "emerging complex behavior generation" [2-4]. The great variety of possible dynamical structures in a chaotic system suggests to us that an application to complex information processing may avoid combinatorial explosion and/or control complexity [5-7].

In order to make the problem clearer and more practical, we shall concentrate on a particular information processing task. However, if we restrict the information processing context too severely, it will lead to a less than interesting result because of an ad hoc solution. The trade-off is that restriction usually allows easier modeling and formulation. Therefore, in setting the context, one attempts to maintain the universality of the application to a wide field of processing. We propose pattern search in a high-dimensional state space [8, 9, 12], in which a set of patterns is stored in the form of attractors [10]. An attractor is a state surrounded by a domain known as a basin, in which all sequences converge to the attractor over time. In more detail, the task is to retrieve one or more of the stored patterns, if existent, under the condition that the given access information is not complete or sufficient to reach the target pattern directly. This is an ill-posed problem in which the uniqueness or even the existence of a solution is not guaranteed. It is a desirable function of the information processor to solve this complex problem efficiently, and there are two important points with respect to the search process in the state space [9].

1. How can the processor quickly and efficiently find the target basin from ambiguous access information?

2. If there is no stored pattern that satisfies the access information, can the search processor generate information close to the requested pattern?

Several methods are candidates for the envisioned function: (1) simulated annealing (for an example of a rather sophisticated one, see [11]); (2) chaotic dynamics [7, 8, 9, 12]; (3) genetic algorithms (i.e., evolutionary strategies); (4) neural networks; (5) cellular automata [13, 14]; and (6) mode-competitive nonlinear dynamics [15]. In this paper, we employ a method that uses both a neural network and a genetic algorithm. The former has a long history and became an especially active field during the past decade as a powerful method for parallel processing [16, 17]. Associative memory and classification of highly complicated signals [18] are two prominent applications of neural networks. Genetic algorithms have also attracted much attention in recent years, especially in the field of optimization in high dimensions [19-23].


2. The neural network model


To begin with, let us start with a description of the state space. Without loss of generality, we employ a space of states (image patterns) that consist of 20 × 20 pixels, with one neuron corresponding to each pixel. Neural activity is restricted to two states, +1 (firing state) and -1 (non-firing state). Each pattern is specified by a 400-dimensional state vector $v = (v_1, v_2, \ldots, v_{400})$, and the state space consists of all possible 400-bit patterns (the vertices of a hypercube in 400 dimensions). In the present model, each neuron is assumed to be coupled with every other neuron, and a pair's coupling strength is represented by a synaptic connection matrix, the dimension of which is 400 × 400. In the state space, we store 30 patterns as "attractors."

We employ

$$v_i(t+1) = \Theta\!\left(\sum_{j=1}^{400} T_{ij}\, v_j(t)\right) \tag{1}$$

as the time development rule of this neural network model, where each neuron is represented by a discrete variable $v_i = \pm 1$, and $T_{ij}$ is the synaptic connection matrix. The mapping $\Theta$ is a step function. With respect to the 30 patterns used for actual simulations, the overlap between them is relatively large, as can be observed from Figure 1. Thus, retrieval performance is bad if an autocorrelation connection matrix is used. We therefore use orthogonalization of patterns [24, 25] by introducing adjoint state vectors $\tilde{v}^\alpha$ ($\alpha = 1, 2, \ldots, 30$) defined by

$$\tilde{v}^\alpha \cdot v^\beta = \delta_{\alpha\beta}, \qquad \tilde{v}^\alpha = \sum_\gamma (a^{-1})_{\alpha\gamma}\, v^\gamma, \qquad a_{\alpha\beta} = v^\alpha \cdot v^\beta \tag{2}$$

where $a^{-1}$ is the inverse of the 30 × 30 correlation matrix $a$, the elements of which are defined by the scalar products between pairs of pattern vectors. The synaptic connection matrix is now defined as

$$T = \sum_{\alpha=1}^{30} v^\alpha \otimes \tilde{v}^\alpha, \tag{3}$$

also known as the pseudo-inverse of the autocorrelation matrix. The matrix $T_{ij}$ is symmetric, so the energy function $E = -\sum_{ij} v_i T_{ij} v_j$ is a Lyapunov function for this system. Since the number of stored patterns is much smaller than the total number of neurons, the resulting basins of attraction in state space tend to be quite large, and it is expected that much information is distributed among these basins. We call this information "seeds" for the access. A "seed" contains many partial features of the target patterns (attractors). Once some seed containing a partial feature of an attractor is given to the neural network, it is entrained into the attractor, as if one retrieved the specified feature by glancing at the shape of a face. However, in the present artificial neural network model, it should be noted that the efficiency of retrieval strongly depends on the topological structure of the basins in the state space.
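The construction above can be sketched numerically. The block below is a minimal illustration, not the authors' code: it stores random ±1 patterns (standing in for the face images of Figure 1) with the pseudo-inverse rule of equations (2) and (3), and iterates the update rule of equation (1); the sizes follow the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS, N_PATTERNS = 400, 30             # sizes used in the paper

# Random +/-1 patterns stand in for the 30 face images of Figure 1.
V = rng.choice(np.array([-1, 1]), size=(N_PATTERNS, N_NEURONS))

# Correlation matrix a and adjoint vectors (equation (2)).
a = V @ V.T                                  # a_ab = v^a . v^b  (30 x 30)
V_adj = np.linalg.inv(a) @ V                 # adjoint vectors as rows

# Pseudo-inverse synaptic connection matrix (equation (3)).
T = V.T @ V_adj                              # 400 x 400, satisfies T v^a = v^a

def update(v):
    """One synchronous step of equation (1); Theta is the sign step."""
    return np.where(T @ v >= 0, 1, -1)

# Every stored pattern is an exact fixed point of the dynamics.
assert all(np.array_equal(update(v), v) for v in V)

# A "seed" with 40 flipped pixels falls back into the target basin.
seed = V[3].copy()
seed[rng.choice(N_NEURONS, size=40, replace=False)] *= -1
for _ in range(20):
    seed = update(seed)
assert np.array_equal(seed, V[3])
```

With only 30 patterns in 400 dimensions the load is low, which is why the basins in the sketch are wide enough to absorb 10% pixel noise in a few steps.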


Figure 1: Thirty memory patterns consisting of 20 × 20 pixels (neurons). We assume the fourth pattern among the face patterns (third row in the figure) to be the target of the memory search.

By introducing the state space of the neural network, our ill-posed problem of pattern search becomes more practical. Suppose there is an accessor who wants to find a face in a database composed of many different faces using ambiguous access information, such as the feature of eye shape. In our simulation, we search the state space for face patterns that have a specific eye-shape feature. Face patterns consist of 400 bits of information, but the eye-shape feature consists of only 40 bits. We gave the eye-shape data (40 bits) of the fourth face pattern in the third row of Figure 1 as access information. The search algorithm is described in the next section.
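As a concrete illustration of how such partial access information can be scored, the snippet below counts how many pixels of a candidate pattern agree with the 40-bit eye-shape data. The particular index set chosen for the eye region is a hypothetical assumption made for illustration, not taken from the paper.

```python
# Hypothetical index set for the 40 eye pixels (illustrative only).
EYE_PIXELS = list(range(120, 160))

def evaluate(pattern, access_info):
    """Evaluation value: number of eye pixels matching the access data."""
    return sum(pattern[i] == a for i, a in zip(EYE_PIXELS, access_info))

# A pattern carrying the requested eye shape scores the maximum, 40.
target = [1 if i % 2 else -1 for i in range(400)]
access = [target[i] for i in EYE_PIXELS]
assert evaluate(target, access) == 40
assert evaluate([-v for v in target], access) == 0
```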

3. The genetic algorithm

In this section, we consider the application of a genetic algorithm [26] to generate an effective search orbit in the state space.

Let us define a gene string as consisting of 401 elements, where the first element $x_0$ is a header containing the evaluation value of the gene and the other 400 elements $x_i$ are a state vector. We introduce a certain number of these genes, and taken together they form a gene pool. Here, as in most GAs, two different kinds of operations are performed: "mutations" and "recombinations." The former are operations that randomly change a certain number of components of a gene. The latter are operations in which two or more genes interact to generate a new gene string composed of a certain number of elements from each parent string. In both operations, the elements are chosen using random number generators.

In the following, we give a brief description of the possible operations.

3.1 Mutation

Mutations in the present paper consist of the following three operations. They are:

1. Single flip: choose a gene; generate a random number 1 ≤ n1 ≤ 400; flip component n1.

2. Complement: choose a gene; generate two random numbers 1 ≤ n1, n2 ≤ 400; flip all components between n1 and n2.

3. Inversion: choose a gene; generate two random numbers 1 ≤ n1, n2 ≤ 400; invert the order of components between n1 and n2.
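The three mutation operations might be sketched as follows. This is a non-authoritative reading of the definitions above, using 0-based indexing in place of the paper's 1-based component numbers:

```python
import random

L = 400  # number of state-vector components in a gene

def single_flip(gene, rng=random):
    """Operation 1: flip one randomly chosen component."""
    g = list(gene)
    g[rng.randrange(L)] *= -1
    return g

def complement(gene, rng=random):
    """Operation 2: flip all components between two random positions."""
    g = list(gene)
    n1, n2 = sorted(rng.sample(range(L), 2))
    for i in range(n1, n2 + 1):
        g[i] *= -1
    return g

def inversion(gene, rng=random):
    """Operation 3: reverse the order of components between two positions."""
    g = list(gene)
    n1, n2 = sorted(rng.sample(range(L), 2))
    g[n1:n2 + 1] = g[n1:n2 + 1][::-1]
    return g

gene = [1 if i % 3 else -1 for i in range(L)]
assert sum(a != b for a, b in zip(single_flip(gene), gene)) == 1
assert sorted(inversion(gene)) == sorted(gene)   # a pure reordering
assert len(complement(gene)) == L
```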

3.2 Recombination (crossover)

Choose two genes, say genes α and β, and generate two random numbers such that 1 ≤ n1, n2 ≤ 400. Take n3 = min[n1 + n2, 400] and exchange the components of genes α and β between n1 and n3.
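Read literally, this crossover might look as follows (again a sketch with 0-based slices; the positions n1, n2 are drawn as in the text, and the exchanged segment runs from n1 to n3 = min(n1 + n2, 400)):

```python
import random

L = 400  # gene length (state-vector part)

def crossover(gene_a, gene_b, rng=random):
    """Exchange the components of two genes between n1 and n3 = min(n1+n2, L)."""
    n1 = rng.randint(1, L)
    n2 = rng.randint(1, L)
    n3 = min(n1 + n2, L)
    a, b = list(gene_a), list(gene_b)
    a[n1 - 1:n3], b[n1 - 1:n3] = b[n1 - 1:n3], a[n1 - 1:n3]
    return a, b

pa = [1] * L
pb = [-1] * L
ca, cb = crossover(pa, pb)
# At every position the two children hold exactly the parents' two values.
assert [x * y for x, y in zip(ca, cb)] == [-1] * L
```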

These operations generate strings that are then selected as follows.

3.3 Selection criteria

Each trial string, the result of an application of one of the above operations, is put into the neural network as an initial pattern. The neural network is updated according to equation (1) until the output converges on a pattern. Then the converged pattern is evaluated in comparison with the given access information, and it subsequently replaces the predecessor from which it was generated if it possesses higher or equal quality. It is discarded if its quality is lower than that of its predecessor. Since every string carries its own evaluation value, this is a totally local selection method. It has been successfully applied in the case of the travelling salesperson problem [26]. Figure 2 shows the overall process.
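The selection step condenses to a few lines; in the sketch below, `retrieve` stands for running the network of equation (1) to convergence and `evaluate` for the comparison against the access information (both names are placeholders, not the paper's):

```python
def select(parent, parent_value, trial, retrieve, evaluate):
    """Local selection: keep the trial string iff the pattern it
    retrieves scores at least as well as its predecessor."""
    value = evaluate(retrieve(trial))
    if value >= parent_value:
        return trial, value          # trial replaces its predecessor
    return parent, parent_value      # trial is discarded

# Toy check: retrieval is the identity, evaluation is the sum of bits.
better, v = select([1, -1], 0, [1, 1], retrieve=lambda s: s, evaluate=sum)
assert (better, v) == ([1, 1], 2)
worse, v = select([1, 1], 2, [-1, -1], retrieve=lambda s: s, evaluate=sum)
assert (worse, v) == ([1, 1], 2)
```

Because each string carries its own evaluation value (the header element $x_0$), no global ranking of the pool is needed, which is what makes the method local.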

4. Results of the simulations

A simulation was carried out that employed nine genes in the gene pool and used both mutation and recombination processes as discussed in the previous section. A comparison was done between random search and genetic search. In both cases, we started with randomly chosen patterns. Figure 3 shows intermediate patterns using the GA. Note that all the patterns correspond


Figure 2: Overall algorithm of the pattern search simulation. Note the following. [1] In the mutation process, when we apply one of the mutation operations defined in section 3, the sequence of operations in the repeating process is taken as cyclic, i.e., [→ mutation 1 → mutation 2 → mutation 3 →]. [2] The generated pattern (gene) is given to the neural network as an initial condition, and the recurrent updating of firing patterns is done until it converges. [3] If any converged pattern possesses the target feature, the search process stops and is regarded as successful; otherwise, the evaluation value is compared with that of the predecessor (gene). During the recombination process, the evaluation procedure is the same as in the mutation process.


Figure 3: (a) Initial random patterns given to nine genes; (b) patterns after the first generation.

Figure 3: (c) Patterns after the 42nd generation (note that the pattern in the left corner is a superimposed pattern between the shape of the first face and the eye, nose, and mouth of the fourth face in Figure 1); (d) patterns after the 140th generation; most patterns have converged to the target pattern.



to local minima of the energy landscape, since the trial patterns created by GA operations are mapped onto local minima of energy by the neural network.

In Figure 3, there appears a pattern worth noting because it is not a stored pattern. It indicates that one can use genetic algorithms to find spurious attractors that possess high value or quality.

Now let us turn to an evaluation of the search performance. Define f(N) as the ratio between the number of successful trials and the total number of trials. One trial consists of a search process involving nine genes with random patterns as the initial states. A trial is successful if the access information is satisfied within the required number N of iterations. If there is no appearance of the target pattern within N iterations, we regard the trial as unsuccessful. The same quantity is calculated for random search and shown for comparison.

In the case of random search, it is easy to calculate f(N). Note that it is possible to define the one-step success probability p in random search. It is simply the ratio of the basin volume to the total volume of the state space, since each step can be regarded as completely random. Thus the process is considered to be Markovian, and there is no correlation or memory effect between two succeeding steps. Therefore, f(N) has the form

$$f(N) = \sum_{r=0}^{N-1} (1-p)^r\, p = p\,\frac{1-(1-p)^N}{1-(1-p)} = 1-(1-p)^N. \tag{4}$$

In order to understand the dependence of f(N) on N in more detail, one regards N as a continuous variable. Then one differentiation yields

$$\frac{d}{dN} f(N) = (1-p)^N \log\!\left(\frac{1}{1-p}\right) = \exp\!\left(-N \log\!\left(\frac{1}{1-p}\right)\right) \log\!\left(\frac{1}{1-p}\right). \tag{5}$$

This indicates that df(N)/dN depends on N with exponential damping in random search. We show the results of our simulation in Figure 4 and Figure 5, both in the case of random search and of genetic search, where differentiation was done numerically over discrete, finite intervals. The results for random search indicate exponential damping, which is quite plausible as noted above. On the other hand, as shown in Figure 5, genetic search indicates a characteristic distribution of differential success rates as a function of N. This tendency is not accidental or due to statistical fluctuation. The same dependency as shown in Figure 6, obtained for the average success rate of 1000 samples, was obtained by averaging over 10000 samples. It is an interesting question why the distribution function indicates a very different behavior in genetic search as compared to random search.
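The closed form f(N) = 1 − (1 − p)^N of equation (4) is easy to confirm numerically; the sketch below compares it with a simulated memoryless search (the values of p and N are chosen arbitrarily for the check):

```python
import random

def simulated_f(p, N, trials=20000, seed=1):
    """Empirical success rate of a memoryless search: each of the N
    steps independently hits the target basin with probability p."""
    rng = random.Random(seed)
    hits = sum(any(rng.random() < p for _ in range(N))
               for _ in range(trials))
    return hits / trials

p, N = 0.01, 100
analytic = 1 - (1 - p) ** N          # equation (4)
assert abs(simulated_f(p, N) - analytic) < 0.02
```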

Genetic search is superior to random search by almost an order of magnitude, as indicated in Figure 7. The same simulation was done for different numbers of genes in the gene pool. Results are shown in Figure 8, and a considerable improvement can be observed if we increase the population size M. Note that in order to obtain Figure 8, we divided the total number of


Figure 4: Differential success rate versus upper limit of search step in random search (1000 samples).

Figure 5: Differential success rate versus upper limit of search step in genetic search (1000 samples).


Figure 6: Differential success rate versus upper limit of search step in genetic search (1000 samples).

Figure 7: Comparison of success rate between random search and genetic search.


Figure 8: Comparison of success rates for different numbers of genes in the gene pool.

trials by the size of the population. A further improvement can be obtained by carefully adjusting the recombination frequency (see Figure 9).

It is interesting to speculate why the search using a GA is more efficient than search using a simple random walk. We do not yet have a definitive answer, but the following explanations seem plausible:

1. A genetic algorithm can, with high probability, detect spurious attractors having high value or quality, even if the basin is relatively small compared to the basins of the stored patterns.

2. There is a memory effect in the development of genes produced by genetic operations, whereas in a random walk, there is no memory effect between the updating of patterns in state space.

5. Concluding remarks

1. Genetic search algorithms are superior to simple random search (Markovian random walk) by almost an order of magnitude.

2. The utilization of both mutation and recombination operations improves search performance. Optimization of the mutual frequency of operations is beneficial.


Figure 9: Comparison of success rate between different frequencies of recombination in genetic search.

3. Search performance is improved considerably when the number of genes in the gene pool is increased. This indicates that the present method is especially suited for parallel processing.

4. In performing the simulation, spurious attractors were found, many of which were useful in the sense of synthesizing patterns (information generation) from stored patterns.

Acknowledgments

The authors deeply thank Dr. Peter Davis for his valuable comments.

References

[1] J. A. Kelso, A. J. Mandell, and M. F. Shlesinger, editors, Dynamic Patterns in Complex Systems (Singapore: World Scientific, 1988).

[2] H. G. Schuster, Deterministic Chaos (Weinheim: VCH, 1985).

[3] R. L. Devaney, An Introduction to Chaotic Dynamical Systems (Menlo Park, CA: Benjamin Cummings, 1986).

[4] K. Kaneko, Physica D, 41 (1990) 137.

[5] J. S. Nicolis, Rep. Prog. Phys., 49 (1986) 80.


[6] Neural and Synergetic Computers, vol. 42, Springer Series in Synergetics, edited by H. Haken (Berlin: Springer-Verlag, 1989).

[7] I. Tsuda, E. Koerner, and H. Shimizu, Prog. Theor. Phys., 78 (1987) 51.

[8] P. Davis and S. Nara, Tech. Dig. of Int. Conf. on Fuzzy Logic and Neural Networks, Iizuka (1990).

[9] P. Davis and S. Nara, page 97 in Proceedings of the First Symposium on Nonlinear Theory and its Applications (1990).

[10] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science, 220 (1983) 671.

[11] Y. Mori, P. Davis, and S. Nara, Journal of Physics A, 22 (1989) L525.

[12] S. Nara and P. Davis, Neural Networks, 6 (1993) 963.

[13] J. E. Hopcroft and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation (Reading, MA: Addison-Wesley, 1979).

[14] S. Wolfram, Theory and Applications of Cellular Automata (Singapore: World Scientific, 1986).

[15] H. Haken, Information and Self-Organization (Berlin: Springer-Verlag, 1988).

[16] J. A. Anderson and E. Rosenfeld, editors, Neurocomputing (Cambridge, MA: MIT Press, 1988).

[17] J. A. Anderson, A. Pellionisz, and E. Rosenfeld, editors, Neurocomputing 2 (Cambridge, MA: MIT Press, 1991).

[18] D. Rumelhart et al., in Parallel Distributed Processing, edited by J. L. McClelland, D. E. Rumelhart, and the PDP Research Group (Cambridge, MA: MIT Press, 1986).

[19] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Reading, MA: Addison-Wesley, 1989).

[20] L. Davis, editor, Genetic Algorithms and Simulated Annealing (London: Pitman, 1989).

[21] J. D. Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms (San Mateo, CA: Morgan Kaufmann, 1989).

[22] R. K. Belew and L. B. Booker, editors, Proceedings of the Fourth International Conference on Genetic Algorithms, San Diego (San Mateo, CA: Morgan Kaufmann, 1991).

[23] H.-P. Schwefel and R. Männer, editors, Proceedings of the First International Conference on Parallel Problem Solving from Nature, Dortmund (Berlin: Springer-Verlag, 1990).

[24] L. Personnaz, I. Guyon, and G. Dreyfus, Phys. Rev. A, 34 (1986) 4217.


[25] A. Fuchs and H. Haken, page 33 in Dynamic Patterns in Complex Systems, J. A. Kelso, A. J. Mandell, and M. F. Shlesinger, editors (Singapore: World Scientific, 1988).

[26] W. Banzhaf, Biological Cybernetics, 64 (1990) 7.