Progress in Brain Research, Volume 2: Nerve, Brain and Memory Models



PROGRESS IN BRAIN RESEARCH

VOLUME 2

NERVE, BRAIN AND MEMORY MODELS


PROGRESS IN BRAIN RESEARCH

ADVISORY BOARD

W. Bargmann Kiel

E. De Robertis Buenos Aires

J. C. Eccles Canberra

J. D. French Los Angeles

H. Hydén Göteborg

J. Ariens Kappers Amsterdam

S. A. Sarkisov Moscow

J. P. Schadé Amsterdam

T. Tokizane Tokyo

H. Waelsch New York

N. Wiener Cambridge (U.S.A.)

J. Z. Young London


PROGRESS IN BRAIN RESEARCH VOLUME 2

NERVE, BRAIN AND MEMORY MODELS

EDITED BY

N. WIENER, Massachusetts Institute of Technology, Cambridge, Mass. (U.S.A.)

AND

J. P. SCHADÉ, Central Institute for Brain Research, Amsterdam (The Netherlands)

ELSEVIER PUBLISHING COMPANY

AMSTERDAM / LONDON / NEW YORK

1963


SOLE DISTRIBUTORS FOR THE UNITED STATES AND CANADA

AMERICAN ELSEVIER PUBLISHING COMPANY, INC.

52 VANDERBILT AVENUE, NEW YORK 17, N.Y.

SOLE DISTRIBUTORS FOR GREAT BRITAIN

ELSEVIER PUBLISHING COMPANY LIMITED

12B, RIPPLESIDE COMMERCIAL ESTATE

RIPPLE ROAD, BARKING, ESSEX

This volume contains a series of lectures delivered during a symposium on

CYBERNETICS OF THE NERVOUS SYSTEM

which was held as part of the Second International Meeting of Medical Cybernetics at the Royal Academy of Sciences at Amsterdam from 16-18 April, 1962. The organization of the Symposium was partly supported by grants from

The Netherlands Government
Philips (Eindhoven, The Netherlands)
I.B.M. (Amsterdam, The Netherlands)
Electrologica (The Hague, The Netherlands)

LIBRARY OF CONGRESS CATALOG CARD NUMBER 63-17304

WITH 96 ILLUSTRATIONS AND 1 TABLE

ALL RIGHTS RESERVED

THIS BOOK OR ANY PART THEREOF MAY NOT BE REPRODUCED IN ANY FORM, INCLUDING PHOTOSTATIC OR MICROFILM FORM, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS


List of Contributors

W. R. ASHBY, University of Illinois, Urbana, Ill. (U.S.A.)
V. BRAITENBERG, Centro di Cibernetica del Consiglio Nazionale delle Ricerche, Istituto di Fisica Teorica, Università di Napoli (Italy)
J. CLARK, Burden Neurological Institute, Bristol (Great Britain)
J. D. COWAN, Massachusetts Institute of Technology, Cambridge (U.S.A.)
H. FRANK, Institut für Nachrichtenverarbeitung und Nachrichtenübertragung der Technischen Hochschule Karlsruhe, Karlsruhe (Deutschland)
F. H. GEORGE, Department of Psychology, University of Bristol, Bristol (Great Britain)
E. HUANT, 9 Avenue Niel, Paris (France)
P. L. LATOUR, Institute for Perception Physiology, Soesterberg (The Netherlands)
P. MÜLLER, Institut für Nachrichtenverarbeitung und Nachrichtenübertragung der Technischen Hochschule Karlsruhe, Karlsruhe (Deutschland)
A. V. NAPALKOV, Moscow State University, Moscow (U.S.S.R.)
P. NAYRAC, Clinique Neurologique et Psychiatrique de l'Université de Lille, Lille (France)
A. NIGRO, Via Garibaldi, Messina (Italy)
G. PASK, System Research Ltd., Richmond, Surrey (Great Britain)
N. RASHEVSKY, Committee on Mathematical Biology, University of Chicago, Chicago, Ill. (U.S.A.)
J. L. SAUVAN, 43, Boulevard Albert-1er, Antibes (France)
J. P. SCHADÉ, Central Institute for Brain Research, Amsterdam (The Netherlands)
N. STANOULOV, Ul. K. Peitchinovitch, Sofia (Bulgaria)
M. TEN HOOPEN, Institute of Medical Physics TNO, National Health Research Council, Utrecht (The Netherlands)
A. A. VERVEEN, Central Institute for Brain Research, Amsterdam (The Netherlands)
H. VON FOERSTER, University of Illinois, Urbana, Ill. (U.S.A.)
C. C. WALKER, University of Illinois, Urbana, Ill. (U.S.A.)
O. D. WELLS, Artorga Research Group, Beaulieu, Hants (Great Britain)
N. WIENER, Massachusetts Institute of Technology, Cambridge, Mass. (U.S.A.)
J. ZEMAN, Philosophical Institute, Czechoslovak Academy of Sciences, Prague (Czechoslovakia)
G. W. ZOPF, JR., University of Illinois, Urbana, Ill. (U.S.A.)


Other volumes in this series:

VOLUME 1

Brain Mechanisms

Specific and Unspecific Mechanisms of Sensory Motor Integration

EDITED BY G. MORUZZI, A. FESSARD AND H. H. JASPER

VOLUME 3

The Rhinencephalon and Related Structures

EDITED BY W. BARGMANN AND J. P. SCHADÉ

VOLUME 4

Growth and Maturation of the Brain

EDITED BY D. P. PURPURA AND J. P. SCHADÉ

VOLUME 5

Lectures on the Diencephalon

EDITED BY W. BARGMANN AND J. P. SCHADÉ

VOLUME 6

Topics in Basic Neurology

EDITED BY W. BARGMANN AND J. P. SCHADÉ


Contents

Introduction to neurocybernetics
N. Wiener and J. P. Schadé (Cambridge, Mass. and Amsterdam) . . . . . 1

Nerve-model experiments on fluctuation in excitability
M. Ten Hoopen and A. A. Verveen (Amsterdam) . . . . . 8

The engineering approach to the problem of biological integration
J. D. Cowan (Cambridge, Mass.) . . . . . 22

The neuron as a synchronous unit
P. L. Latour (Soesterberg, The Netherlands) . . . . . 30

Finite automata and the nervous system
F. H. George (Bristol, Great Britain) . . . . . 37

Interpretation of the mechanism of conditioned reflexes inhibiting the activity of a functioning organ
A. Nigro (Messina, Italy) . . . . . 53

Information processes of the brain
A. V. Napalkov (Moscow) . . . . . 59

Information and the brain
J. Zeman (Prague) . . . . . 70

Informationspsychologie und Nachrichtentechnik
H. Frank (Karlsruhe, Deutschland) . . . . . 79

Klassen und Eigenschaften von Lernmatrizen
P. Müller (Karlsruhe, Deutschland) . . . . . 97

Sensory homeostasis
G. W. Zopf, Jr. (Urbana, Ill.) . . . . . 114

Information sensorielle et information conceptuelle dans l'activité bio-électrique du cerveau
E. Huant (Paris) . . . . . 122

Modèle cybernétique d'une mémoire active à capacité d'accueil illimitée
J. L. Sauvan (Antibes, France) . . . . . 142

Preliminary notes on a functional scheme of human thought
N. Stanoulov (Sofia) . . . . . 154

Histology, histonomy, histologic
V. Braitenberg (Naples, Italy) . . . . . 160

A discussion of the cybernetics of learning behaviour
G. Pask (Richmond, Surrey, Great Britain) . . . . . 177

L'obstacle de non-linéarité en neurologie
P. Nayrac (Lille, France) . . . . . 215

Adaptive machines in psychiatry
J. Clark (Bristol, Great Britain) . . . . . 224

The essential instability of systems with threshold, and some possible applications to psychiatry
W. R. Ashby, H. Von Foerster and C. C. Walker (Urbana, Ill.) . . . . . 236

Mathematical theory of the effects of cell structure and of diffusion processes on the homeostasis and kinetics of the endocrine system with special reference to some periodic psychoses
N. Rashevsky (Chicago, Ill.) . . . . . 244

Irrelevance of a 'nervous system'
O. D. Wells (Hants, Great Britain) . . . . . 257

Epilogue
N. Wiener (Cambridge, Mass.) . . . . . 264

Author Index . . . . . 269

Subject Index . . . . . 272


Introduction to Neurocybernetics

N. WIENER AND J. P. SCHADÉ

Massachusetts Institute of Technology, Cambridge, Mass. (U.S.A.) and Netherlands Central Institute for Brain Research, Amsterdam (The Netherlands)

Cybernetics is the study of communication and control in machines and living organisms. Biocybernetics is that part of cybernetics in which living organisms are emphasized. In biocybernetics there are two fields which we must distinguish, even though the distinction cannot be made perfectly sharp. These fields are neurocybernetics and medical cybernetics. The former is concerned with the pathways of action via sense-organs, neurons and effectors; because cybernetics is primarily concerned with the construction of theories and models, the symbols and hardware of neurocybernetics resemble rather closely the elements of the nervous system and the sense-organs. Medical cybernetics is the field in which homeostasis, the maintenance of a constant internal environment, is the main consideration. There is no sharp distinction between these two fields, because changes in the level of equilibrium of our internal factors are unquestionably associated with changes in our external reactions. For example, our external reactions depend on chemical messengers such as the hormones, which belong to our internal environment. Even the more precise transfer of messages which takes place by nervous transmission has important chemical and possibly even hormonal factors.

The propagation of an action potential along the course of a nerve is an electrochemical process, and the transmission of a message across a synapse is probably the same kind of process but much more complicated. In both cases, changes in the general chemistry will affect the communication of nerves, and we cannot make an absolutely sharp separation between these two fields of biocybernetics. However, one cannot take a complicated subject like biocybernetics and hope to treat it as a whole. It must be broken up into parts. In other words, analysis necessarily has an element of falsification. It is impossible to make any great progress without the division of biocybernetics into neurocybernetics and what we have called medical cybernetics.

Our tool for the study of a complicated system is the measurement of certain quantities associated with the system and the study of their mathematical relations. The older view of physics held that it was possible to give a complete account of all the quantitative relations in a total system. The modern view is that some of these relations cannot be completely described and can only be given in a statistical form, which tells us not what always happens but, within a certain precision, what usually happens. It is immaterial which of these two views is taken in the study of biocybernetics


because a complete account of all the quantities is far beyond our powers. The complexity of the brain is enormous, and all that we can measure is a minor part of what happens. The rest must be estimated on a statistical basis or not at all. Thus the mathematics of the nervous system and of cybernetics must be statistical. Not only must it be statistical; it must also be essentially non-linear.

In a linear system, when we add inputs we add outputs; when we multiply an input by a constant, we multiply the output by that constant. Linear systems seem to be very common. In the swing of a pendulum we have a system which we can consider linear for small angles. Since strictly linear systems are almost non-existent, it is instructive to consider nearly linear systems. Now the distinction between the nearly linear system and the truly linear system is vital. In a strictly linear system, no two modes of oscillation can interact in any degree and a general balance between the modes of oscillation is impossible. In a nearly linear system there can be a transfer of energy from one mode to another. There are certain states of the system where very slow changes will produce an equilibrium. This is closely connected with Ehrenfest's theory of adiabatic invariance.
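In symbols (a standard statement of the superposition principle, added here for clarity rather than quoted from the text), a system $L$ is linear precisely when, for inputs $x_1$, $x_2$ and any constant $a$,

$$L(x_1 + x_2) = L(x_1) + L(x_2), \qquad L(a\,x_1) = a\,L(x_1).$$

A nearly linear system satisfies these relations only approximately, and it is just this small departure that allows energy to pass from one mode of oscillation to another.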

A clock is an interesting, highly non-linear system. We often think of the grandfather clock only in terms of the pendulum, but it also includes the weights, the moving train of gears and the hands. Here we have a direct-current input and a direct-current load; the pendulum is the machine that regulates the time. There must be a flow of energy between the input and the output, but a flow of energy between a part at zero frequency and the oscillating pendulum involves non-linearity. Non-linearity of this general type occurs in cybernetic phenomena, and we must be very careful not to suppose that because a system is nearly linear it is truly linear.

In the nervous system, with its thresholds, limitations of amplitude and one-way conduction, we have all the forms of non-linearity that are found in electrical systems with amplitude limiters, rectifiers and switches. The mathematics for the nervous system and for cybernetics must therefore be a non-linear statistical theory.

Memory involves the storage of information; conditioning involves long-time effects. These, like the sort of adiabatic invariance which Ehrenfest considers, involve long time intervals and are highly non-linear. If we attempt to make an electrical apparatus in which memory and conditioning play a role, we do not attempt to make all the memories and conditionings of one type. We have a large repertory of methods, some for short-term memories and some for long-term memories, and we are not confined to any one way of doing this task. This multiplicity of methods is probably the reason why we have found it so difficult to make any real headway with the mechanism of the actual memory and conditioning of the nervous system. Caianiello (1960) at Naples has presented a theory of the nerve net which is both statistical and non-linear. There is no question that a great amount of work is being done in these directions, and it seems that this will be one of the most fruitful techniques of cybernetics in the near future.

Cybernetics is not only the study of control and communication in man and machine, but also between man and machine. There has been a certain attitude against this relation, to be found particularly in some engineering circles which


involves the comparison of human performance with machine performance, to the disadvantage of humans. In a comparative study of human performance and machine performance, it must be realized that the human being does some things much better than the machine and some things worse. The human system is neither as precise nor as quick as a computing machine. On the other hand, the computing machine tends to go to pieces unless all details of its programming are strictly determined. The human being has a great capacity for achieving results while working with imperfect programming. We can do a tremendous amount with vague ideas, but to most existing machines vague ideas are of absolutely no use. Precision and speed are valued by some engineers much more than the sum of all human qualities. This preference of machine over man displays a fundamental contempt for man and a dislike for human values. The proper relation between man and machine, however, is not one of competition but lies in the development of systems utilizing both human and mechanical abilities.

An important future machine is the learning machine, which can modify its programming according to its own success or failure. Such machines already exist, for the playing of checkers for example, and can exist for many other purposes. These machines modify their programming by virtue of their success, and in order to engineer this one must know what success means. In a machine for playing a game, success means winning the game according to certain rules. Although the other player is part of the conditioning system, the judgement of success is entirely objective. If, however, you want to make a translating machine, success consists in the actual intelligibility of the translation to a reader. In determining the success of a translating machine, a human element must therefore be utilized. It is conceivable to give a purely mathematical account of this, but it will not be easy.

The nervous system is unbelievably complex, particularly if we examine a great mass such as the cerebral cortex. There are various ways of analysing the fine structure of the nervous system, ways which are largely incompatible with one another in detail and yet must be put together to give a complete picture. If we are to see the individual neuron with clarity, one of the best ways is the Golgi staining process. But this method stains only a few neurons out of the total mass, and it remains an open question whether the selection is random or not. It gives us, however, a good opportunity to study the organization of the dendritic plexus. The classical Nissl methods stain the cell bodies of neurons and glial cells but reveal nothing beyond very short pieces of dendrites and axons. The electron microscopist cuts his sections so thin that he has only small scraps of neurons and pieces of processes, so that the relationships in which he is interested are lost.

Staining techniques such as the silver methods can be employed in much thicker sections, the disadvantage being that a bewildering mass of closely interwoven fibres is presented, which makes analysis of the connections impossible. A number of these difficulties can be overcome by using combinations of histological methods, such as the Nissl and Golgi techniques or the electron microscopic and Golgi techniques. In general we are left with the extraordinarily difficult task of putting together pieces


of evidence obtained in various ways. This explains partly why our knowledge of the relationship between neural structure and function is rather obscure.

Neurophysiology can analyse only by partial destruction. If a micro-electrode is put into the nervous system we may get locally detailed results, but we have also produced a small trauma. The inability to study the nervous system without producing a trauma is analogous to the inability in quantum theory to study a physical phenomenon without producing a disturbance in it. Observation, here too, is not an innocent and detached act.

In discussing neurophysiology and neuroanatomy an essential point comes into focus: the difference between the long myelinated fibres and the short unmyelinated axons. The long fibres tend to conduct an impulse in an all-or-none way as a series of spikes. The spike, if it occurs, has a constant shape. On the other hand, a spike may fall out or not appear at all at the end of the fibre in the teledendron. This is taken by many to be the normal mode of behaviour in the nervous system, the short fibres included. Here the complicated effects of one neuron on its neighbouring neurons are left out, and the old classical view of the spike is taken. In order that this chain of reactions assume a permanent form, a certain headway, both in time and in distance, is needed. Is it possible that enough headway exists in the short fibres? Moreover, the short fibres are not uniform in cross-section, and the velocity is subject to changes. Frankly, there is much doubt whether the all-or-none principle applies at all to the very short fibres of the cortex.

I have said that the neurophysiologist and the neuroanatomist must produce trauma and destruction in order to see. However, their particular type of destruction is one which is relevant to their problems. There is another type of analysis which is necessary for the cyberneticist: the analysis of behaviour. In this there is also destruction; as a matter of fact, it is quite literally destruction. Animals can be wrecked completely by their conditioning. There is also the intellectual destruction of treating the conditioned reflex as a whole in itself, without consideration of the entire animal's past. Here, in another form, is the falsification necessary for analysis.

It cannot be said which way is right or wrong. The methods of behaviourists, neuroanatomists and physiologists must be united in order to approach the large-scale view of cybernetics. These many modes of analysis make the training of the cyberneticist very complicated. He needs the talents of a neurophysiologist, neuroanatomist, mathematician, physicist, behaviourist and sensory psychologist. One thing of importance is that you cannot hope to get people of these different disciplines to produce cybernetic work merely because they are brought together. They must understand the language, methods and thoughts of the others.

This volume contains the proceedings of the Symposium on Cybernetics of the Nervous System held at the Royal Dutch Academy of Sciences in April 1962. We have tried to organize a multidisciplinary symposium around three main subjects: nerve, brain and memory models. The multidisciplinary character was emphasized by inviting neurologists, psychiatrists, biologists, engineers, mathematicians and physicists to cover some of the many intriguing problems and theories in this field of


bionics. Although not every aspect of bionics is covered, we were very fortunate to have such an outstanding group of scientists together.

Ten Hoopen and Verveen describe experiments on isolated nodes of Ranvier. A comparison with studies on models was made to elucidate the phenomenon of fluctuation in excitability. This is the property of a neural element to respond to a non-random input with a certain probability. This phenomenon was considered to result from a form of biological noise. The results from animal experiments and model studies showed a very close resemblance. In both it was found that the relation between probability of response and stimulus intensity approximates the Gaussian distribution function.

In his paper on the engineering approach to the problem of biological integration, Cowan, from McCulloch's laboratory at the Massachusetts Institute of Technology, discusses the necessity of using many-valued logics and/or information theory where noisy units and noisy connections are given. He demonstrates that redundant computers exhibiting arbitrarily low frequencies of error can be constructed in such a way that they are not completely redundant but process a finite fraction of information. A very interesting result is that such a computer need not be precisely connected and that a certain bounded fraction of errors in connection may be tolerated. These results may find an interesting application in the construction of mathematical models of cortical structure.

The sequential behavior of a set of idealized neurons is the subject of Latour’s paper. He investigated nerve nets with mathematical methods similar to those in the theory of linear sequential networks. By assuming that this approach could also be valid to describe cortical events he explains in a simple way the periodicities found in reaction time experiments.

George's paper is a lucid account of the comparison between finite automata and the nervous system. His lecture is primarily concerned with the logical-net type of automaton, representing a sort of idealized or conceptual nervous system. He describes the extent to which these automata can be made to fit the facts discovered by neurophysiologists and neuroanatomists. A number of general suggestions are also made to bring the conceptual nervous system more into line with the existing empirical facts.

A different approach was employed by Nigro, who gives a scheme of the cerebral cortex mainly based on the data of Bykov's school. He gives a theoretical interpretation of the mechanism of conditioned reflexes whose action is made possible by evoking a pattern that is already well established in the brain structures.

Napalkov represented the cybernetic group of the Moscow State University. In his paper he describes an analysis of some complex forms of brain activity on the basis of a study of the information processes. Besides his theoretical studies he also deals with the cybernetic explanation of pathologic phenomena such as hypertension.

An interesting feature of neurocybernetics is that work is being done at so many different levels. This is particularly true of Zeman who, in his paper, is mainly concerned with linking together the symbols in the contents of a language, informational machine language translations and the process of registration, elaboration and


transmission of information in the brain. This whole process of what he calls 'brain writing' is, as he assumes, closely connected with the creation of 'brain fields', where temporary phenomena are transformed into symbols, much as in the pictures of visible speech or in Chladni's sound figures. He assumes further that every brain field is connected with a certain excitation energy and a certain informational content which manifests itself as a specific semantic spectrum. A mathematical model is drawn up to connect the results of brain physiology with mathematics and linguistics.

Steinbuch's active group at the Institut für Nachrichtenverarbeitung und Nachrichtenübertragung at the University of Karlsruhe was represented by Frank and Müller. Information psychology and telecommunication were tied together by Frank using conceptual and perceptual models. His ideas are nicely illustrated by the learning matrix for pattern-recognition processes. A description is given of a device which works invariantly with respect to translation, affine transformation and skewness.

Müller discusses in particular the characteristics and patterns of learning matrices. A learning phase and a knowing phase can be distinguished in the learning matrix. These two phases operate consecutively: when the learning phase is completed, the knowing phase starts, during which the previously learned characteristics of the particular pattern presented to the matrix are indicated by a maximum-detection device. The construction and some applications are discussed, showing its simple use in many respects.

The following two papers are concerned with the input relations and homeostasis of sense-organs. Zopf discusses, more from the physiological and anatomical point of view, the cybernetic significance of (a) the innervation of the accessory motor apparatus of the sensory systems, and (b) the efferent fibres ending on or near the receptor cells. The first class may play a role in attenuating or normalizing the intensity dimension of the stimulus, which is of course an interesting hypothesis. The second class, whose actual existence has always bothered the neuroanatomists, is discussed in relation to stimulus configurations. The utility of this efferent sensory control lies mainly in the direction of exploiting stimulus constraints (input redundancy).

Huant approaches the problem of sensory information more from the psychiatric point of view and draws a close correlation between the interaction of sensory information with information already obtained in the brain, on the one hand, and the alpha-rhythms of the electroencephalogram on the other. Many other investigations, however, suggest that this relationship is not at all clear. The elucidation of a few of the many intriguing problems in the basic mechanism of memory and thinking should be one of the main goals of neurocybernetics.

The papers of Sauvan and Stanoulov are concerned with the construction of models explaining some of the parameters of human memory and thinking.

The research of Braitenberg combines classical neurohistology with the logic of nerve nets. He proposes a method for the functional analysis of the structures of grey substances in the brain. The approach is comparative, using techniques such as the Golgi staining method and a quantitative assessment of myelin. His method has already proved to be useful in an analysis of the cerebellar cortex. He was able to


fit the structure of the molecular layer of the cerebellum into a functional scheme representing a clock that will translate distance into time intervals and vice versa.

Cybernetic models of learning and cognition in evolutionary systems and brains are extensively discussed by Pask. In his review he gives a beautiful survey of the models and of the conception of learning. His model illuminates, among many other properties, the role of 'distributed' or 'non-localised' functions, and it possesses an interesting asymmetry which seems to account for some of the peculiar discontinuities in behaviour associated with 'attention' and 'insight'.

The following five papers cover some of the neurologic and psychiatric implications of cybernetic models.

Nayrac gives a survey of his ideas on the problems of non-linearity in clinical and experimental neurology. Adaptive machines and their possible use in psychiatry are discussed by Clark. He suggests the use of an adaptive machine, in the form of a game taught to the patients by a Pask teaching machine, as a possible source of objective diagnostic measurements in psychiatric patients.

The paper of Ross Ashby and his coworkers is concerned with the essential instability of systems that use a threshold, and its possible relevance to psychiatry.

Rashevsky’s lecture bridges mathematical biology, homeostasis and kinetics of the endocrine system and some psychiatric aspects.

Wells does not believe in the dominant role played by the nervous system in many respects. The thesis of his paper is that the nervous system develops late in evolution and that the central nervous system was not essential until rapid organism-initiated movement led to the refinement of 'distance receptors'. One of his main points is that most of the essential organisation to be studied was developed prior even to the autonomic system, far down the evolutionary scale. He puts forward the interesting hypothesis that 'this organisation involves a mapping of the set of chromosomes onto the total organism and the mapping of the set of surviving organisms into the set of chromosomes'.


Nerve-Model Experiments on Fluctuation in Excitability

M. TEN HOOPEN AND A. A. VERVEEN

Institute of Medical Physics TNO, National Health Research Council, Utrecht and Central Institute for Brain Research, Amsterdam (The Netherlands)

INTRODUCTION

A nerve fibre stimulated with identical rectangular electrical pulses of near-threshold intensity responds with an action potential in only a fraction of all trials. This phenomenon, the fluctuation in excitability, viz. the property that in the threshold region the fibre responds to a non-random, fixed input with a certain probability, reveals the existence of a noise factor in excitation. This is an endogenous property of the fibre, as can be concluded from the mutual independence of the reactions of the two fibres in a two-fibre preparation upon application of the same stimulus (Pecher, 1939).

It was shown (Verveen, 1960, 1961) that upon low-frequency stimulation (once per 2 sec) with identical stimuli:

(1) the successive reactions have each time the same probability of occurrence, independent of the preceding reactions;

(2) the relation between the probability of response and stimulus intensity approximates the Gaussian distribution function (see the formulas below);

(3) both parameters of this function, the threshold (the mean) and the spread (the standard deviation), are dependent on the stimulus duration: the (50%) stimulation threshold is related to the stimulus duration according to the strength-duration characteristic; the coefficient of variation, the quotient of spread and threshold, called the relative spread (RS), proves to be independent of the stimulus parameters and about equal for short (0.25 msec) and long (2.5 msec) pulses;

(4) the fluctuation in excitability is also present during the recovery period, during a sub-rheobasic current and after the application of strychnine and urethane; only the parameters of the probability-intensity relation undergo a change;

(5) the RS, the measure of the width of the threshold range relative to the value of the threshold, is related to the fibre diameter: the smaller the fibre, the larger the RS (Verveen, 1962).
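In symbols (a compact restatement of points (2) and (3) above, not a quotation from the paper): writing $I$ for the stimulus intensity, $\theta$ for the threshold and $\sigma$ for the spread,

$$P(\text{response} \mid I) \approx \Phi\!\left(\frac{I-\theta}{\sigma}\right), \qquad \mathrm{RS} = \frac{\sigma}{\theta},$$

where $\Phi$ is the cumulative Gaussian distribution function and the relative spread RS is the coefficient of variation of the threshold.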

MODEL EXPERIMENTS

One of the possible sources of threshold fluctuation (cf. Frishkopf and Rosenblith, 1958) might be given by local statistical variations of the membrane potential due to


thermal agitation noise (Pecher, 1939; Fatt and Katz, 1952). In this case the RS might be a measure of the effective membrane noise potential relative to the threshold membrane potential (Verveen, 1962).

As long as we are not able to study the proper biological noise directly, for instance as slight disturbances of the resting membrane potential in the above-mentioned case (Brock et al., 1952), additional information may be obtained in an indirect way, assuming that a voltage noise is responsible for the phenomenon.

To this end model experiments were carried out with a twofold intention: first, to gain an understanding of the influence of noise on a triggerable device; second, to develop methods allowing a more effective study of the processes occurring in the nerve fibre.

Work is also in progress on a mathematical model. The central problem is related to the axis-crossing problem, a topic well known in the field of information and detection theory. For the case in question the difficulty arises in the form of a time-dependent function. In earlier studies on the theoretical interpretation of threshold measurements on excitable tissue in the presence of noise, only the amplitude distribution (mostly assumed to be Gaussian) of the random function, and not its rate of change (the frequency spectrum), seems to have been at stake (Rashevsky, 1948; Hagiwara, 1954). Recently Viernstein and Grossman (1960) drew attention to the necessity of involving this aspect of the process in computations. For reasons of simplicity we desired to start with a simple analogue device, a general impression at this stage of the study being more valuable than numerical solutions on complex models. More intricate problems, for instance in connection with the local potential and its fluctuation (Del Castillo and Stark, 1952), are not yet considered either.

This communication is, therefore, not the presentation of a rounded-off investigation. We are just gaining some understanding of the behaviour of a triggerable device in the presence of noise.

Harmon's electronic neuron model (Harmon, 1959) provided the triggerable device. The stimulus, delivered by a Tektronix pulse generator, was fed into it via a network transforming the stimulus in the same way as apparently occurs in an arbitrary, functionally isolated single frog node of Ranvier (A-fibre, situated in an intact sciatic frog nerve), one of the neural elements investigated in the previously mentioned experiments. This transformation is deduced from the time course of excitability after applying a non-effective constant stimulus of long duration and from the strength-duration relation.

The excitability cycle after the application of a constant stimulus at $t = 0$ is of the form:

$$f(t) = \exp(-t/\tau_1) - \exp(-t/\tau_2), \qquad \tau_1 > \tau_2.$$

For a rectangular stimulus of finite duration $T$, starting at $t = 0$, the stimulus-transforming function can be written as:

$$\begin{cases} f(t) & \text{for } 0 < t < T, \\ f(t) - f(t-T) & \text{for } t > T. \end{cases}$$


The resulting excitability cycle and strength-duration relation of this model are comparable to those of the frog node studied in the situations mentioned above.

The rest of the model consists essentially of a monostable multivibrator. As soon as the voltage at the input reaches a critical value, a pulse is given off. On the internal threshold of the device, noise of a verifiable quality is superimposed. The noise is introduced into the model, via a suitable network and a band-pass filter (Krohn-Hite; max. 20-20,000 c/s; slope 24 db/octave), by a white-noise generator (Peekel; 20-20,000 c/s). In the experiments reported hereafter, band-limited white noise of adjustable intensity and frequency spectrum was used.

No recovery period was involved in these experiments, though another version of the model, with a supranormal and a second subnormal phase in the relative refractory period, is ready for further investigation. The duration of the recovery period was intentionally made much shorter in the model than it is in the fibre, thereby allowing the model to be tested at higher frequencies (intervals between the stimuli 160 msec). The lengthy recovery period of the nerve fibre requires a stimulation frequency of no more than once per 2 sec; otherwise cumulative effects will complicate the interpretation of the experimental results.
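To make the arrangement concrete, the following is a minimal numerical sketch of such a triggerable unit: the stimulus is shaped by the transforming function f(t) given above, band-limited Gaussian noise is superimposed on a fixed internal threshold, and the probability of response is estimated by repeated trials. The parameter values and the crude band-limiting of the noise (a running-mean filter standing in for the Krohn-Hite filter of the real set-up) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

TAU1, TAU2 = 6.7e-3, 0.5e-3   # time constants of the frog A-fibre case (seconds)
DT = 1.0e-5                   # integration step (s)
T_MAX = 20.0e-3               # length of each trial (s)

def step_response(t):
    """Response of the stimulus-transforming network to a unit step at t = 0."""
    return np.exp(-t / TAU1) - np.exp(-t / TAU2)

def transformed_stimulus(t, amplitude, duration):
    """f(t) for 0 < t < T and f(t) - f(t - T) for t > T, scaled by the pulse amplitude."""
    f = step_response(t)
    tail = np.where(t > duration, step_response(np.clip(t - duration, 0.0, None)), 0.0)
    return amplitude * (f - tail)

def band_limited_noise(n, f_upper, rms):
    """Gaussian noise given a crude upper frequency limit by a running-mean filter
    (an illustrative stand-in for the band-pass filter of the real set-up)."""
    white = rng.normal(size=n)
    width = max(1, int(round(1.0 / (2.0 * f_upper * DT))))
    smooth = np.convolve(white, np.ones(width) / width, mode="same")
    return rms * smooth / smooth.std()

def probability_of_response(rel_intensity, duration, threshold=1.0,
                            noise_rms=0.005, f_upper=2000.0, trials=400):
    """Monte Carlo estimate of the probability that the transformed stimulus crosses
    the noisy internal threshold at least once during a trial."""
    t = np.arange(0.0, T_MAX, DT)
    # scale the pulse so that rel_intensity = 1 just reaches the noise-free threshold
    peak = transformed_stimulus(t, 1.0, duration).max()
    drive = transformed_stimulus(t, rel_intensity * threshold / peak, duration)
    hits = sum(np.any(drive >= threshold + band_limited_noise(t.size, f_upper, noise_rms))
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Scan a narrow range around the nominal threshold, as in the probit measurements.
    for rel in (0.985, 0.995, 1.000, 1.005, 1.015):
        p = probability_of_response(rel, duration=1.0e-3)
        print(f"stimulus = {rel:.3f} x threshold -> P(response) ~ {p:.2f}")
```

Scanning the relative intensity over a narrow range around the threshold traces out a sigmoid probability-intensity relation of the kind shown in Figs. 1 and 2.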

RESULTS

The first series of observations on the model was limited to noise with a frequency spectrum of 20-1000 c/s. It followed that the behaviour of the model, on the whole, is about equal to that of the nerve fibre: (1) the existence of a threshold region (Fig. 1); (2) the relation between probability of response and stimulus intensity, with given stimulus duration, can be approximated by a Gaussian distribution function; (3) both parameters, the threshold and the spread, depend on stimulus duration (Fig. 2).

[Figure 1: 'Nerve fibre' and 'Model' panels; abscissa: stimulus intensity, 98-102% of the threshold.]

Fig. 1. The relation between probability of response and stimulus intensity, in percentage of the threshold. Stimulus given every 2 sec.


As mentioned earlier, for the nerve fibre the RS is the same for short and long pulses. For the model this is the case only under certain conditions of the noise characteristics. This property will be discussed later; (4) the relation between probability of response and stimulus duration (for intensities which are different but fixed each time) has the same properties (Fig. 3): the curves are asymmetrical to the right and steeper at higher intensities. At rheobasic stimulus intensities a percentage of 100 is never reached.

[Figure 2: 'Nerve fibre' and 'Model' panels; stimulus durations 2.5 and 0.25 msec (fibre) and 0.20 and 10 msec (model); abscissa: stimulus intensity in V.]

Fig. 2. The relation between probability of response and stimulus intensity for two stimulus durations (A, B) and the same relations after standardization of the threshold (C).

It appeared, therefore, that the agreement between the model experiments and our physiological experiments is satisfactory. The question, however, is whether the model can be used as a tool allowing a more effective study of the processes occurring in the nerve fibre.

Together with the fluctuation in excitability a variation is seen in the time interval


between the initiation of the stimulus and the passage of the eventual action potential in the part of the nerve fibre below the recording electrodes. This fluctuation in response time occurs at the point of stimulation as a variation in latency. It is independent of the site of the recording electrodes on the nerve fibre (Blair and Erlanger, 1933) and is probably due to the different instants of time at which the process of excitation, initiated by the stimulus, triggers the nerve, because of its fluctuating excitability (Erlanger and Gasser, 1937).

[Figure 3: duration-probability curves; panels A (nerve fibre) and B (model); abscissa in msec.]

Fig. 3. Duration-probability curves. Stimulus intensity in percentage of the rheobase.

[Figure 4: latency-distribution histograms; nerve fibre and model panels; abscissa in msec.]

Fig. 4. Latency-distribution histograms. Stimulus intensity in percentage of rheobase: A1: 110.0; B1: 100.3; C1: 99.7; A2: 103.0; B2: 100.8; C2: 99.8. Numbers beside histograms indicate the percentages of response. Stimulus durations in all cases 10 msec. For the nerve fibre the latency includes the time of conduction over a length of 12 cm (frog A-fibre).

The same phenomenon is present in the model. An orientating investigation of the latency distributions in the model upon stimulation with a pulse of long duration (10 msec) revealed the following characteristics (Fig. 4). At higher stimulus intensities the mean latency between stimulus and response is shorter and the dispersion is smaller. The histograms are asymmetrical to the right, in particular at low intensities of the stimulus.

Investigation of a frog nerve fibre (also with 10 msec duration pulses) showed that the characteristics predicted by the model are present (Fig. 4).

It will be noted that these latency distributions bear a close resemblance to the interval distributions studied by Buller et al. (1953) and by Hagiwara (1954) for the muscle spindle, by Grossman and Viernstein (1961) for slowly adapting, spontaneously discharging neurons of the cochlear nucleus, and by Amassian et al. (1961) for the spontaneous activity of neurons in the reticular formation of the midbrain.

It is probable that this phenomenon allows a closer investigation of the noise characteristics. As yet no model studies have been made on the relations between different modes of noise and the latency distributions.

Fig. 5. Probit transformation of a Gaussian distribution function (Finney, 1952).

For convenience the probability-intensity function was chosen as a means to study the influences exerted by different modes of noise and to compare the behaviour of the model with that of the nerve fibre in this respect. In particular the RS is a surveyable measure.

The procedure used is probit analysis (Finney, 1952). This analysis is applied to each set of data covering the relation between probability of response and stimulus intensity. The method is based on a transformation of the Gaussian distribution function in such a way that the function is made linear. The manner in which percentages are converted into probits is illustrated in Fig. 5.

[Figure 6: probit-transformed probability-intensity relations; noise spectrum (20-200 up to 20-20,000 cps) indicated on the left, probit steps on the abscissa; panels A, B, C.]

Fig. 6. Probability-intensity relations after the probit transformation. Stimulus durations: 0.2 msec (A), 1.0 msec (B) and 10 msec (C). The value of one step is 1% of the threshold stimulus intensity. The band width of the noise frequency spectrum is indicated on the left in cps. Stimulus-transforming function: 1 - exp(-t/τ) with τ = 0.5 msec. Standard noise intensity (for explanation see text).


In this transformation the inverse of the standard deviation appears as the slope of the transformed function. Owing to the technique used, viz. plotting the intensity on the abscissa in units of the threshold, the reciprocal of the slope is not an estimate of the standard deviation but of the coefficient of variation, the RS.
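In symbols (this little derivation is implicit in the text rather than written out there): if the response probability is $\Phi\bigl((I-\theta)/\sigma\bigr)$, then

$$\text{probit} = \frac{I-\theta}{\sigma} + 5 = \frac{\theta}{\sigma}\left(\frac{I}{\theta} - 1\right) + 5,$$

so when the abscissa is the intensity in units of the threshold, $I/\theta$, the slope of the transformed function is $\theta/\sigma$ and its reciprocal is $\sigma/\theta = \mathrm{RS}$.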

Each set of data was obtained in the following way. The stimulus intensity is adjusted to such a value that upon stimulation the model reacts with a probability of about 50%. This value is then an estimate of the threshold. Hereafter, the stimulus intensity is varied in steps, each step being equal to a given percentage (1%) of the threshold value. The threshold range is scanned in this way, while a certain number (100) of stimuli is given at each step; the total number of reactions per step is noted and converted into percentages and probits. After this transformation the estimates for threshold and RS were determined graphically. The estimates obtained for the RS were not corrected for the shift in the (50%) stimulation threshold following a change of the noise level or the frequency spectrum. This omission does not alter the estimates of the RS significantly (less than 10%).

[Figure 7: probit-transformed probability-intensity relations; noise spectrum indicated on the left, probit steps on the abscissa.]

Fig. 7. Probability-intensity relations after the probit transformation. Stimulus durations: 0.2 msec (A), 1.0 msec (B) and 10 msec (C). The value of one step is 1% of the threshold stimulus intensity. The band width of the noise frequency spectrum is indicated on the left in cps. Stimulus-transforming function: 1 - exp(-t/τ) with τ = 0.5 msec. Noise intensity 5 db below (A1, B1, C1) and 5 db above (A2, B2, C2) standard intensity.
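As an illustration of this procedure, the short sketch below estimates the threshold and the RS from a set of (intensity, fraction responding) pairs by a least-squares fit to the probit-transformed data; the straight-line fit stands in for the graphical determination described above, and the sample data are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def estimate_threshold_and_rs(intensities, fractions):
    """Least-squares probit fit: probit(p) = Phi^{-1}(p) + 5 is a linear function of the
    stimulus intensity when the response probability follows a Gaussian distribution
    function.  The intensity at probit 5 (50% response) estimates the threshold; the
    reciprocal of the slope estimates the spread, and RS = spread / threshold."""
    probits = norm.ppf(np.asarray(fractions)) + 5.0
    slope, intercept = np.polyfit(np.asarray(intensities), probits, 1)
    threshold = (5.0 - intercept) / slope
    spread = 1.0 / slope
    return threshold, spread / threshold

# Invented example: seven intensity steps (arbitrary stimulator units), 100 stimuli per step.
intensities = np.array([0.97, 0.98, 0.99, 1.00, 1.01, 1.02, 1.03])
fractions = np.array([3, 10, 30, 52, 72, 90, 97]) / 100.0

threshold, rs = estimate_threshold_and_rs(intensities, fractions)
print(f"threshold ~ {threshold:.3f} (stimulator units), RS ~ {rs:.3f}")
```

With the invented data above the fit gives a threshold close to 1.00 and an RS of about 0.016.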

The results of the measurements concerning the intensity-probability relations for different noise intensities and noise frequency spectra are presented in Figs. 6, 7 and 8.

Figs. 9 and 11 give a graphical summary of the estimates of the RS, while Fig. 10 gives an example of the degree of reproducibility of the measurements. Several values of the upper limit of the noise frequency band width were examined. For ease of survey only the results for 20,000, 2,000 and 200 cps are reproduced in Figs. 9 and 11.

Three intensities of the noise were used: a so-called standard intensity and two other intensities, 5 db above and below the first respectively. The root mean square (r.m.s.) value of the noise amplitude relative to the internal threshold is about 0.005 in the case of the standard intensity and for a frequency band of 20-2,000 cps. The r.m.s. values for the other intensities and for the same frequency band are then 77% greater and 44% less than this value, respectively. Three stimulus durations were used: 0.2, 1.0 and 10 msec.

Figs. 6, 7, 9 and 10 have been obtained with a stimulus-transforming function f(t) of the form 1 - exp(-t/τ) with τ = 0.5 msec. This case, comparable to a non-accommodating nerve fibre, was used most extensively in order to have more easily interpretable results.

For Figs. 8 and 11 the function f(t) is given by exp(-t/τ1) - exp(-t/τ2) with τ1 = 6.7 msec and τ2 = 0.5 msec. This case corresponds with the before-mentioned frog A-fibre.

[Figure 8: probit-transformed probability-intensity relations; noise spectrum indicated on the left, probit steps on the abscissa.]

Fig. 8. Probability-intensity relations after the probit transformation. Stimulus durations: 0.2 msec (A), 1.0 msec (B) and 10 msec (C). The value of one step is 1% of the threshold stimulus intensity. The band width of the noise frequency spectrum is indicated on the left in cps. Stimulus-transforming function: exp(-t/τ1) - exp(-t/τ2) with τ1 = 6.7 msec and τ2 = 0.5 msec. Noise intensity 5 db below (A1, B1, C1) and 5 db above (A3, B3, C3) standard intensity (A2, B2, C2).

[Figure 9: RS plotted against the noise spectrum (cps) for stimulus durations 0.2, 1.0 and 10 msec; panels A, B, C.]

Fig. 9. Relative spread, RS, in relation to stimulus duration, noise intensity and noise frequency spectrum. Stimulus-transforming function: 1 - exp(-t/τ) with τ = 0.5 msec. Noise intensity 5 db above (A) and 5 db below (C) that of the standard intensity (B).

Fig. 10. As Fig. 9B, to show the degree of reproducibility.

[Figure 11: RS plotted against the noise spectrum (cps) for stimulus durations 0.2, 1.0 and 10 msec; panels A, B, C.]

Fig. 11. Relative spread, RS, in relation to stimulus duration, noise intensity and noise frequency spectrum. Stimulus-transforming function: exp(-t/τ1) - exp(-t/τ2) with τ1 = 6.7 msec and τ2 = 0.5 msec. Noise intensity 5 db above (A) and 5 db below (C) that of the standard value (B).
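For the record, the 5 db noise-intensity steps mentioned above correspond to amplitude factors of (a standard decibel conversion, not spelled out in the paper)

$$10^{5/20} \approx 1.78 \qquad\text{and}\qquad 10^{-5/20} \approx 0.56,$$

i.e. r.m.s. values roughly 77-78% greater and about 44% less than the standard value, consistent with the figures quoted above.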

It can be seen that changing the upper frequency limit of the band width, the lower limit being fixed at 20 cps, has the following influences for a stimulus of fixed duration: (1) with increased band width the threshold decreases; (2) the RS increases, at least for the lower values of the upper frequency limit of the band width.

When stimuli of different durations are compared it is noted that the decrease of the threshold with increasing band width is more pronounced for long than for short stimuli, whereas the change of the RS is more pronounced for short than for long stimuli. Both effects are more marked for noise of higher than for noise of lower intensity.

It can be concluded that noise of increasing band width and intensity: (1) renders a triggerable unit more excitable, an effect more pronounced for long-duration stimuli; (2) increases the width of the threshold region (relative to the threshold), an effect more pronounced for short-duration stimuli.


DISCUSSION

The effect of noise intensity on the stimulus threshold and the RS is not demonstrated in the nerve fibre because threshold measurements on the same nerve without fluctuations are impossible. It is conceivable, however, to change the biological noise artificially in a predictable way without altering the fundamental ignition mechanism.

Strychnine is known to increase the threshold range (Erlanger et al., 1941). A study of the influence of strychnine on the fluctuation region of frog A-fibres has revealed that the RS increases, while no influence on the threshold was found (Verveen, 1961).

In this case only short-duration stimuli were used (0.12 msec). According to the results of our model experiments one should expect these results and, furthermore, that upon stimulation with pulses of long duration a decrease of the threshold will be present after treatment with strychnine. Experiments are planned to detect whether or not the strychnine-treated nerve exhibits this phenomenon upon stimulation with long pulses, as predicted by the model experiments.

The independency of the RS of stimulus duration, found for the nerve fibre, gives an indication of the nature of the noise.

From Figs. 9, 10 and 11 it is seen that with certain combinations of noise intensity and noise-frequency band the RS for the model is indeed the same for the three stimulus durations tested, within the experimental and compilation errors. So far as band-limited white noise is concerned, this means that the rate of fluctuation of physiologically effective noise does not exceed 2,000 cps, and the upper limit may very well be about 500 cps.

Frishkopf (1956), in his experiments on first order auditory neurons with the use of externally added noise, arrives at a minimal number of 2,000 states per second. Viernstein and Grossman (1960) accepted a number of 1000 states per second in their study on neural discharge patterns.

These values are in agreement with the conclusion reached above from the comparison of our model and nerve-fibre experiments. Furthermore, the idea that very brief fluctuations of membrane potential probably have little effect is advanced by Buller et al. (1953).

Until now the lower frequency limit of the noise studied in the model has been kept constant at 20 c/s. Further refinement is possible by changing this lower limit also. This cannot yet be done because there is another, striking but somewhat disturbing, agreement between nerve fibre and model: a long-term instability, or drift, of the threshold. Its causes are not clear. The nerve fibre is a biological preparation and its properties are thereby subject to slow changes in time (metabolic changes, deterioration). The threshold is furthermore sensitive to changes in temperature, so that the experiments should be made in a strictly temperature-controlled environment. The transistors in the model are sensitive to thermal influences too. These were somewhat compensated by working in a thermally stabilized room. A long-term drift of the stimulating apparatus may also come into play.

These circumstances make it necessary to work with short samples and, consequently, to accept rather large variances in the estimates of threshold and RS. These can be


reduced by increasing the rate of sampling, which was practicable for the model as described before. It is not possible to do this with the nerve fibre, because of its long recovery period.

Our observations on a simple nerve model with noise superimposed on the threshold, and their comparison with the behaviour of the nerve fibre upon stimulation, have thus revealed a number of interesting features and problems, warranting further study on the nerve fibre (and on the model).

SUMMARY

In summarizing our work it is concluded that a simple nerve model describes the intensity-probability and duration-probability relations as encountered in the nerve fibre quite satisfactorily. The general characteristics of the latency-distributions as predicted by the model appear to hold for the nerve and they indicate a means of further study of ‘noise’ characteristics in nerve fibres.

Observations on the behaviour of the nerve model with noise of different intensities and frequency band widths revealed that noise of increasing intensity and band width renders a triggerable unit more excitable, an effect more pronounced for long duration stimuli, and increases the width of the threshold region (relative to the threshold), which is more pronounced for short duration stimuli.

The implications of these findings with regard to the actual nerve fibre are discussed.

R E F E R E N C E S

AMASSIAN, V. E., MACY, JR., J., AND WALLER, H. J., (1961); Patterns of activity of simultaneously recorded neurons in midbrain reticular formation. Annals of the New York Academy of Sciences, 89, 883-895.
BLAIR, E. A., AND ERLANGER, J., (1933); A comparison of the characteristics of axons through their individual electric responses. American Journal of Physiology, 106, 524-564.
BLAIR, E. A., AND ERLANGER, J., (1935/36); On excitation and depression in axons at the cathode of the constant current. American Journal of Physiology, 114, 317-327.
BROCK, L. G., COOMBS, J. S., AND ECCLES, J. C., (1952); The recording of potentials with an intracellular electrode. Journal of Physiology (London), 117, 431-460.
BULLER, A. J., NICHOLLS, J. G., AND STROM, G., (1953); Spontaneous fluctuations of excitability in the muscle spindle of the frog. Journal of Physiology (London), 122, 409-418.
DEL CASTILLO, J., AND STARK, L., (1952); Local responses in single medullated nerve fibres. Journal of Physiology (London), 118, 207-215.
ERLANGER, J., BLAIR, E. A., AND SCHOEPFLE, J. M., (1941); A study on the spontaneous oscillations in the excitability of nerve fibres, with special reference to the action of strychnine. American Journal of Physiology, 134, 705-718.
ERLANGER, J., AND GASSER, H., (1937); Electrical Signs of Nervous Activity. Philadelphia, Pa., University of Pennsylvania Press.
FATT, P., AND KATZ, B., (1952); Spontaneous subthreshold activity at motor nerve endings. Journal of Physiology (London), 117, 109-128.
FINNEY, D. J., (1952); Probit Analysis. Cambridge, Cambridge University Press.
FRISHKOPF, L. S., (1956); A probability approach to certain neuro-electric phenomena. Research Laboratory of Electronics, Massachusetts Institute of Technology, Technical Report 307.
FRISHKOPF, L. S., AND ROSENBLITH, W. A., (1958); Fluctuations in neural thresholds. Symposium on Information Theory in Biology. H. P. Yockey, Editor. London and New York, Pergamon Press (p. 153).


GROSSMAN, R. G., AND VIERNSTEIN, L. J., (1961); Discharge patterns of neurons in cochlear nucleus. Science, 131, 99-101.
HAGIWARA, S., (1954); Analysis of interval fluctuation of the sensory nerve impulse. Japanese Journal of Physiology, 4, 234-240.
HARMON, L. D., (1959); Artificial neuron. Science, 118, 72-73.
PECHER, C., (1939); La fluctuation d'excitabilité de la fibre nerveuse. Archives internationales de Physiologie et de Biochimie, 49, 129-152.
RASHEVSKY, N., (1948); Mathematical Biophysics. Chicago, University of Chicago Press.
VERVEEN, A. A., (1960); On the fluctuation of threshold of the nerve fibre. Structure and Function of the Cerebral Cortex. D. B. Tower and J. P. Schadé, Editors. Proceedings of the Second International Meeting of Neurobiologists, Amsterdam, 1959. Amsterdam, Elsevier Publishing Company (p. 282).
VERVEEN, A. A., (1961); Fluctuation in Excitability. Thesis, University of Amsterdam.
VERVEEN, A. A., (1962); Fibre diameter and fluctuation in excitability. Acta Morphologica Neerlando-Scandinavica, 5, 79-85.
VIERNSTEIN, L. J., AND GROSSMAN, R. G., (1960); Neural discharge patterns and the simulation of synaptic operations in transmission of sensory information. C. C. Cherry, Editor. Fourth London Symposium on Information Theory, September, 1960. London, Butterworth and Co.


The Engineering Approach to the Problem of Biological Integration

J. D. COWAN

Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge (U.S.A.)

It has been estimated that something like 4 × 10¹⁹ bits per sec are processed in macromolecular synthesis, and that macromolecular systems containing blueprints for future development are produced by cells and bacteria, at error-rates of the order of 10⁻¹⁰ per bit (Quastler, 1957). It has also been estimated that the actual error-rate per molecule involved in macromolecular synthesis is of the order of 10⁻² (Pauling, 1960). Assuming a requirement of 5 bits per molecule, we are led to the conclusion that the system error-rate is much lower than the component error-rate. This suggests that within the macromolecular system there exist various integrating mechanisms that effectively control malfunctions in one way or another, to produce a reliable system.
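The arithmetic behind this comparison can be made explicit. The short sketch below uses only the figures quoted above, together with the stated assumption of 5 bits per molecule, to convert the per-molecule error-rate into a per-bit component error-rate and set it against the quoted system error-rate.

```python
# Worked arithmetic for the error-rate comparison quoted above.
per_molecule_error = 1e-2      # error-rate per molecule (Pauling, 1960)
bits_per_molecule = 5          # assumed information content per molecule
system_error_per_bit = 1e-10   # error-rate of macromolecular synthesis (Quastler, 1957)

# If each of the 5 bits fails independently, the per-bit component error-rate
# epsilon satisfies 1 - (1 - epsilon)**5 = per_molecule_error.
component_error_per_bit = 1 - (1 - per_molecule_error) ** (1 / bits_per_molecule)

print(f"component error-rate per bit ~ {component_error_per_bit:.1e}")   # ~2e-3
print(f"system error-rate per bit    = {system_error_per_bit:.0e}")
print(f"ratio ~ {component_error_per_bit / system_error_per_bit:.0e}")   # ~2e7
```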

The vertebrate central nervous system apparently displays similar integrative features. It has been estimated that some 2 × 10¹³ bits per sec are involved in signal processing by approximately 10¹⁰ neurones. Each of these neurones receives many inputs, and apparently computes some fairly specific function. It is not known how reliable this computation is, but estimates of the intrinsic fluctuations of peripheral nerve fibres (Verveen, 1960) suggest an error-rate of per bit as a not unreasonable figure. In addition, neurone interconnectivity does not appear to be rigidly controlled, and a certain amount of local randomness appears to exist in many cortical areas. Thus large parts of the cortex appear to comprise heterogeneous populations of units whose interconnections are not completely determined, and whose function is not completely error-free. The average error-rate of the cortex as a whole is something that is difficult in the extreme to determine, and no estimates of this number are known to the author. There is, however, some evidence that the cortex as a whole exhibits an error-rate considerably less than per bit (McCulloch, 1960). This suggests that there exist, at this level of organisation, mechanisms for the integration of nervous activity, that produce reliable behaviour, i.e. both molecular and neural populations apparently contain mechanisms that ensure stability of output despite various perturbations of the system (Weiss, 1959, 1962).

Various models have been devised to account for these properties. In dealing with macromolecular synthesis, Matthysse (1959) has shown how the mere complexity and the number of possible chemical pathways provide an essential shock-absorber against perturbations of the macromolecular synthesising system. Similarly, Ashby (1950), in a consideration of the properties of randomly assembled nerve nets or automata, found that stability was exhibited by such nets provided they possessed a moderate degree of multiplicity of interconnections, and provided the components were not all alike. A somewhat similar approach was made by Cragg and Temperley (1954), who stressed the importance of having a non-specific organization. They constructed a cooperative model for neural activity based on a suggested analogy between the cortex and the ferromagnet, and noted that the non-specificity of organization would result in a degree of immunity to lesions and malfunctions not present if specific circuits were used. This point was also made by Beurle (1956), who studied the spread of activity, essentially cooperative in nature, in a mass of randomly connected units which could regenerate pulses. Each unit served as a common storage point for information concerning a large number of occurrences, and information about a particular event was stored in a large number of places. Beurle commented that this multiple diversity effect would readily account for the equipotentiality of limited regions of the cortex (Lashley, 1929). This organization also appears to be responsible for the immunity of perceptrons to a certain amount of damage (Block, 1962).

In a certain sense, these integrative properties are obtained rather fortuitously. The models themselves represent the outcome of a process wherein as much as possible of the known structure of the biological system in question is axiomatized, and the resultant behaviour is then studied. This approach is essentially that of the natural philosopher or scientist. There exists a complementary approach, however, that of the engineer, which consists in axiomatizing or specifying behaviour, and then attempting to design or construct structures that will realise it (Uttley, 1961). The engineering approach to problems of biological integration was initiated by McCulloch and Pitts (1943), Wiener (1948) and Von Neumann (1951) in their studies of cybernetics and automata theory. The problem of biological integration, now formulated as that of constructing or designing reliable automata from components of low reliability, has been studied extensively in recent years. One of the earliest results was obtained by Von Neumann (1956), who showed how redundant automata (i.e. automata comprising many more components and connections than would be absolutely required to execute given processes) could be used to perform given computations (logical decision-making and coding) at much lower error-rates than those of the given components. This property resulted from the use, in place of single components, of redundant aggregates of like components that operated on a repeated signal, on a majority logic principle ('what I tell you three times is true'), to produce a correct output. The organization of these aggregates was very much more specific than the non-specific systems previously discussed, and consisted of many repeated circuits with some small degree of interaction between them. Only a certain amount of local randomness of interconnections was permitted. The reliability of such automata increased with their redundancy, but at a rather slow rate, viz. redundancies of the order of 10,000 : 1 were required, given components having error-rates of 5 x per bit, to obtain overall error-rates of 10⁻¹⁰ per bit. It was pointed out by Allanson (1956) that a more plausible model for integrative structures in the central nervous system would be one wherein much more complex components existed, having many inputs rather than the few inputs of Von Neumann's elementary components. It was shown how such complex components might themselves control synaptic errors, in a more efficient manner than redundant aggregates of simple components, simply by using replicated synapses together with the majority principle. Muroga (1960) and Verbeek (1962) carried this work a step further by combining both these techniques. That is, redundant aggregates of complex components were designed to control both synaptic and computational errors. The redundancies required were much smaller than 10,000 : 1. Specific models for neural organizations designed to execute given processes such as classification (Uttley, 1954) or mnemonization (Roy, 1960) have utilised either the majority logic principle or else simple circuit replication.
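The majority-logic principle that Von Neumann used can be illustrated directly. The short sketch below is a generic majority-vote simulation under an assumed component error-rate, not Von Neumann's multiplexing construction itself; it simply shows how the error-rate of a redundant aggregate of like components falls as the redundancy of the aggregate is increased.

```python
# Majority voting over a redundant aggregate of unreliable like components:
# each of the R copies computes the same Boolean value but flips it with
# probability eps; the aggregate outputs the majority. The error-rate of the
# aggregate is estimated for several redundancies R (eps is an assumption).
import random

def aggregate_error_rate(eps, R, trials=200_000, rng=random.Random(1)):
    errors = 0
    for _ in range(trials):
        wrong_votes = sum(rng.random() < eps for _ in range(R))
        if wrong_votes > R // 2:          # the majority is wrong
            errors += 1
    return errors / trials

eps = 0.05                                # component error-rate per decision
for R in (1, 3, 9, 27):
    print(f"R = {R:2d}  aggregate error-rate ~ {aggregate_error_rate(eps, R):.2e}")
```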

It will be seen that essentially two different types of models for biological integration exist. The first type, based on an axiomatization of the known structures of biological systems, results in models having a heterogeneous and diffuse structure, while the second type, based on an axiomatization of the required behaviour of integrated systems, results in models having a more homogeneous and well-localized structure. The fact that these latter structures bear little resemblance to biological structures is not surprising. Axiomatization of behaviour does not result in a unique structure; in principle many different structures could realise the desired behaviour. It is also to be noted that the specific structures appear to be less efficient (they require more components) in achieving given error-rates than do the non-specific structures. It is clearly of interest to consider other ways of constructing reliable automata that are both more efficient and more diffuse in their organization.

A recent development in information theory (Cowan and Winograd, 1962) in fact contains, in essence, methods for realising such automata. It has been shown that the noisy channel coding theorem (Shannon, 1948) may be extended to deal with computation in the presence of noise, in addition to communication, provided certain assumptions are made. The result is that reliable automata may be constructed from components of low reliability, in such a way that much smaller redundancies are required than those in earlier constructions.

It has been demonstrated that errors of interconnection may also be combatted, so that for large enough automata, both component behaviour and interconnectivity may be to some extent random, yet such automata may still function with very low error-rates.

The type of organization required in such automata is of a diffuse nature: each component computes some composite function of many of the functions that have to be executed by the automaton, and any one of these requisite functions is executed by many different components. The resultant diffuseness, or multiple diversity of structure and function, is associated with very low error-rates in the overall functioning of the automaton. Such automata are in fact (a) heterogeneous and non-specific in functional organization, (b) not completely determinate in structural organization, and (c) highly efficient.
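One much-simplified way to picture such a diffuse organization in code is a parity encoding in which every 'component' computes a composite parity of many of the required output bits, while every required bit can be recovered, by majority, from many different component pairs. The sketch below is only an illustration of this multiple-diversity idea under invented parameters; it is not the Cowan-Winograd construction.

```python
# Each component computes a parity <m, s> of many of the k required bits
# (a composite function); each required bit m[i] is recovered from many
# component pairs by majority, so no single component is indispensable.
# Illustration of "multiple diversity" only, not the actual construction.
import itertools, random

rng = random.Random(7)
k = 4                                   # number of required Boolean functions
message = [rng.randrange(2) for _ in range(k)]
subsets = list(itertools.product((0, 1), repeat=k))    # all 2**k parity patterns

def parity(m, s):
    return sum(mi & si for mi, si in zip(m, s)) % 2

components = {s: parity(message, s) for s in subsets}  # noiseless outputs

# Corrupt a fraction of the component outputs at random.
noisy = dict(components)
for s in rng.sample(subsets, len(subsets) // 5):       # ~20 % of components fail
    noisy[s] ^= 1

def recover_bit(i):
    votes = 0
    for s in subsets:
        s_flipped = tuple(b ^ (j == i) for j, b in enumerate(s))
        votes += 1 if noisy[s] ^ noisy[s_flipped] else -1
    return 1 if votes > 0 else 0

decoded = [recover_bit(i) for i in range(k)]
print("message :", message)
print("decoded :", decoded)             # equal to message at this error level
```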

There thus appears to be a link between the scientific and the engineering models of integrated biological systems. Uttley (1954 et seq.) has in fact shown how specific replicated circuits might arise by chance, in a non-specific or randomly connected network, thus providing the basis for an indirect connection between the two sorts of models. If a direct link between such classes of models can be shown to exist, then a fortiori the engineering approach to problems of biological integration becomes more meaningful. It is our intention to demonstrate this link, and to apply the statistical theory of information processing in automata to the construction of concrete models of such biological systems as the cerebral cortex, and certain macromolecular complexes.

A C K N O W L E D G E M E N T

This work was supported in part by the U.S. Army Signal Corps, the Air Force Office of Scientific Research, the Office of Naval Research, and in part by the U.S. Air Force (Aeronautical Systems Division) under contract AF33(616)-8783.

S U M M A R Y

The necessity of using many-valued logics and/or information theory when noisy units and noisy connections are given is discussed. We demonstrated that redundant computers exhibiting arbitrarily low frequencies of error (apart from errors in the final outputs) may be constructed so that they are not completely redundant, but process a finite fraction of information. This depends critically upon the error behaviour of components as a function of complexity. If component errors increase with complexity, this reliability may be obtained only by decreasing the fraction of information processed in the computer. However, it is possible to maximize this fraction for given components and codes. Another important result is that such a computer need not be precisely connected and, in fact, a certain bounded fraction of errors in connection may be tolerated. The application of these results to the construction of mathematical models of cortical structure is considered.

REFERENCES

ALLANSON, J. T., (1956); The reliability of neurons. Proceedings 1st International Congress on Cybernetics, Namur, 681-694.
ASHBY, W. R., (1950); The stability of a randomly assembled nerve-network. Electroencephalography and Clinical Neurophysiology, 2, 471-482.
BEURLE, R. L., (1956); Properties of a mass of cells capable of regenerating pulses. Philosophical Transactions, B, 240, 55-95.
BLOCK, H. D., (1962); The perceptron: a model for brain function, I. Reviews of Modern Physics, 34, 123-135.
COWAN, J. D., AND WINOGRAD, S., (1962); submitted for publication in Philosophical Transactions.
CRAGG, B. G., AND TEMPERLEY, H. N. V., (1954); The organisation of neurons: a cooperative analogy. Electroencephalography and Clinical Neurophysiology, 6, 85-92.
LASHLEY, K. S., (1929); Brain Mechanisms and Intelligence, a Quantitative Study of Injuries to the Brain. Chicago, University of Chicago Press.
MATTHYSSE, S. W., (1959); Thesis, Princeton, Princeton University Press.


MCCULLOCH, W. S., (1960); The reliability of biological systems. Self-Organizing Systems. M. C. Yovits and S. Cameron, Editors. London, Pergamon Press.
MCCULLOCH, W. S., AND PITTS, W., (1943); A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.
MUROGA, S., (1960); Rome Air Development Center. Technical Note 60-146.
PAULING, L., (1960); Errors in Protein Synthesis. Festschrift für A. Stoll. Berlin, Springer.
QUASTLER, H., (1957); The complexity of biological computers. IRE Trans. EC-6, 3, 191-194.
ROY, A. E., (1960); On the storage of information in the brain. Bulletin of Mathematical Biophysics, 22, 139-168.
SHANNON, C. E., (1948); Mathematical theory of communication. Bell System Technical Journal, 27, 379-423; 623-658.
UTTLEY, A. M., (1954); The classification of signals in the nervous system. Electroencephalography and Clinical Neurophysiology, 6, 479-494.
UTTLEY, A. M., (1961); Progress in Biophysics 11. London, Pergamon Press.
VERBEEK, L. A. M., (1962); On error minimizing neuronal networks. Symposium on Principles of Self-Organization. London, Pergamon Press (p. 121-133).
VERVEEN, A. A., (1960); On the fluctuation of the threshold of the nerve fibre. Structure and Function of the Cerebral Cortex. D. B. Tower and J. P. Schadé, Editors. Amsterdam, Elsevier (p. 282).
VON NEUMANN, J., (1951); The general and logical theory of automata. Cerebral Mechanisms in Behavior. L. A. Jeffress, Editor. The Hixon Symposium. New York, Wiley (p. 1).
VON NEUMANN, J., (1956); Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies. C. E. Shannon and J. McCarthy, Editors. Princeton, Princeton University Press (p. 43).
WEISS, P., (1959); Quotation. R. Gerard, Editor. Symposium on Concepts of Biology. New York, National Academy of Sciences.
WEISS, P., (1962); From cell to molecule. Symposium on Molecular Control of Cellular Activity. New York, Wiley.
WIENER, N., (1948); Cybernetics. New York, Wiley.

D I S C U S S I O N

WIENER: There are several things here that interest me very much. Firstly, the algebra-of-logic, the yes or no of the nervous system. It looks superficially as if the simple stage of nervous activity were a yes or no stage. It need not be. When we have the all-or-none law in the nervous system, as it is usually given, the case that a spike propagates itself and the case that the spike disappears are usually regarded as the only possible alternatives; that is not a fair representation of what happens. It takes some distance along the neuron for the spike either to assume a stable form or to disappear. These are, with respect to the individual neuron, long-time phenomena. The importance of this is that the element, the elementary action, in the neuron is not the algebra-of-logic action given here, but a much more complicated action. In the short fibers of the cerebrum it is at least highly questionable whether the runs are long enough to give such a definite yes or no. In other words, when we go to the elementary machine in the nervous system, the degree of complexity of the nervous system is not given simply by the number of neurons, but by the number of stages of activity in each neuron. This is enormously greater, and this may be very important.

The other thing I want to speak about is the statistical mechanical implication. I am quite convinced too that this work is very closely allied to general statistical mechanical work. For that you will have to consider non-linear changes in the neuron. These changes need not be of the all-or-none nature. This all-or-noneness in that case as well is a result of a simplification. This sort of work will be most necessary in long distance forces, with plasmas for example. I am working now on a form of statistical mechanics for that. What is relevant in this connection is that the model I am using does not go back to discrete particles, but goes back to a continuum in which the number of degrees of freedom effectively is kept finite, not by the existence of individual particles, but by certain limitations on the harmonic analysis in space. I feel that this sort of work is going to be useful also in the discussion of nerve nets. That is, I think we need a thorough theory of random functions here and I think that while pure atomism both in neurology and in particle physics is itself one way of approach which has a great deal of validity and which we cannot afford to supersede completely, there are other formalisations which may be equally useful or more useful in many cases.

TEN HOOPEN: The mathematics of these nets of formal neurons is very interesting indeed. However, I am bothered by the last sentence of your abstract: “The intention to consider the application of these results to the construction of mathematical models of cortical structure”. Do you mean with this what you said about Paul Weiss' conjectures? This is asked in view of what Dr. McCulloch stressed: you are using idealised, formal neurons. Could you say something about the relation between these neurons and those actually found in the cortex?

COWAN: In our own group Dr. Lettvin has argued that it is illogical to attempt to apply automata theory to nerve nets. He argues that there exist units which have as many as 100,000 inputs and 1 output ramifying to many places. The geometry of the units is all important and there may be up to 10,000 different kinds of units in the nervous system. Furthermore, he argues that all kinds of slow potential changes occur within such a unit, so that one cannot in fact talk of it as an automaton. But yet when it comes down to a final analysis, that unit with its 100,000 discrete inputs and 1 output is a finite automaton. It does not matter what is going on inside, i.e. what kind of slow potential changes are occurring; you have got discrete impulses as inputs. It is true they get transduced into all kinds of slow potentials and thus all kinds of electrochemical effects occur; however, the final output again is a propagated action potential, and patterns of impulses emerge from the unit. It therefore seems to me that one can still apply statistical theories of automata to nerve nets. Statistical methods are of course extremely weak for such an application, and will not work for specific cases. You have to put this specificity in. But such theories certainly might tell one something more about the qualitative behaviour of a large mass of interacting units, that exhibit integrated behaviour.

GEORGE: Do the neural net systems you have, make in any way an attempt to describe growing processes? Do you have in mind an adult neural net?

COWAN: We are not even as specific as that. Some work has been done at the Carnegie Institute of Technology by W. H. Pierce on adaptive systems which can learn to correct their errors by finding out which inputs are less reliable.

GEORGE: These prewired systems could not simulate growth as they stand.

COWAN: As they stand, no, but it would be very easy to introduce growth if one had sufficiently complex units.


GEORGE: It occurs to me, not too seriously perhaps, that the girls who wire the network to whom you ascribe an error probability function, represent the growth process of your machine. If these girls were replaced by machines, the constants on the machine may be of real interest.

BOK: I would like to ask a question. In your system you have put in something, you get out something and now your idea is that you have certain laws between the input and the output. Why don’t you take the input only and use that input to ask the question: what is the function of our nervous system?

COWAN: I should have probably stated at the start that this analysis is really secondary because one is assuming that one already knows what one wants to do, that one has a blueprint for an ideal machine. Much more important questions concern the possibility of actually duplicating the given natural components that we have and of actually discovering what the random system is doing, but that is not what we have attempted here.

RASHEVSKY: I would like to make a few remarks. Can it be that nature acts in some way as an engineer would probably act, without attempting to explain how and why the whole thing happens? After all, in physics we accept postulates and principles which we cannot visualize and explain. Light is both a wave and a particle, and this caused many headaches to the physicists some 35 years ago, but now we just accept this idea and do not analyse it further. It may be that we will have to accept that the brain was designed by a very smart engineer, and we might not be able to find why it happens so. On the other hand, there may be an evolutionary principle involved allowing the better designed system to survive. We cannot actually conclude that this is an evolutionary process, until a thorough mathematical investigation of certain other aspects of the evolution is made.

HOOGENSTRAATEN: Does your proposed method, for eliminating the non-reliability of components and/or wiring and for obtaining in this way a reliable assembly, remove the contradiction met with when considering finite and strictly causal machines (with infinite tape) giving correct answers to logical propositions? With such machines, there can always be found a proposition which is intuitively true, but which cannot be proved by the machine. Identification of our brain with such a computer would give rise to a contradiction.

COWAN: I am not sure if I understand your question exactly, but there are certain features of complex machines like this that are associated with an indeterminacy. Suppose you had a machine that is actually constructed so that it controls malfunctions and 'survives' in a noisy environment. A necessary property of such a machine is that any one unit interacts tremendously with many others, and any one unit performs some composite of many of the things that are necessary for the survival of the system. The logical description of the input-output relations of this system requires only two-valued Boolean logic. But the logical description of relations within the system requires many-valued logic. One only goes from the many-valued to the two-valued logic when one goes from the inside of the system to the outside. And in fact given knowledge of input-output relations outside, one cannot infer anything about the inside, and the output of a single neuron (i.e. something inside) does not tell you anything about the system either. So there is a basic uncertainty in the whole business between what is inside and what is outside.

STEWART: This sort of theory seems to assume that at any time there will be more or less constant fractions of faulty components. There seems to be then an implication that a component may at some time fail but it really does not stay failed. Did you look into cumulative failures?

COWAN: It turns out that if you go to sufficiently large machines that it does not matter too much whether the components malfunction intermittently or whether they fail permanently, provided only that such errors are not catastrophic.

STEWART: I mean, let us assume that something fails and once it has failed stays that way; is that perhaps then a little better model of the way some things occur in living organisms?

COWAN: If all failures are permanent and go on increasing, you cannot do very much about it, except using more and more components or in fact using replacement techniques. You need a kind of dynamic sort of process to take care of that.

GOLDACRE: I am interested in what you said about the ability of your system to tolerate errors in its internal connections, and I was wondering whether you could give a numerical estimate of this in particular instances. For example, how big would your system need to be to tolerate, say, 50 % of errors in its internal connections, and is there any possibility of a large system working with entirely random connections?

COWAN: Yes, but the computer would have to be indefinitely large, and it would depend very much on the actual noise of the components.

UTTLEY: It is not just the size of the machine which comes into question, but how complicated the computing elements are.

COWAN: Let me give the following example. Suppose errors of interconnection occur with a probability of one-half, and suppose components are available which have a computation capacity of 0.9 (i.e. 0.1 bits of output information are lost due to noise). Then we have the result that the redundancy R that must be used in an automaton, if errors of connection are to be safely ignored, is given by the inequality

R ≥ 1 / (C − p)

where C = 0.9 and p = 0.5 (see Cowan and Winograd, 1962). This results in the value R = 2.5. That is, provided every 5n components do the job of 2n, then for sufficiently large n an automaton may be constructed that functions reliably, despite errors of connection occurring with probability 0.5. Thus a large enough system could work with entirely random connections. However, the components of such a system would have to be very complex, computing functions of very many inputs, with a computation capacity of 0.9, otherwise the above result would not follow.


The Neuron as a Synchronous Unit

P. L. LATOUR

Institute for Perception Physiology, Soesterberg (The Netherlands)

In many sciences the improvements in instrumentation proved to be very useful. In the field of medical electronics many phenomena have been discovered with the help of correlators, averagers, pattern recognition machines and so on. When using precise timing devices in the measurement of reaction times of the eye to visual stimuli, one may find a fine structure in these reaction times. The findings are, in short, that there seem to be about equally spaced moments at which the movement of the eye prefers to start.

The period found in these experiments is related both to the α rhythm of the subject during rest and to the average 'evoked' response while the subject is being tested. Data like these seem to point towards some kind of a built-in clock, which is easily reset by stimuli.

In our effort to gain some insight into the underlying mechanisms, we considered neural networks with ideal neurons of the McCulloch-Pitts type. These neurons have a threshold measured in units of excitation; they will fire after a synaptic delay if the sum of the excitatory and inhibitory impulses is equal to or above threshold. Furthermore it is assumed that there is no delay in the circuit.

With these neurons we can build synchronized devices trying to simulate real neural networks. For our model we chose the Boolean variables a, b, c, . . . with possible values 0 and 1.

We will now describe the firing conditions of the neurons in these Boolean variables. Let us consider the neuron in Fig. 1 with threshold 2.

If, for example, input b is energized, the neuron will fire. Also if both a and b are energized. We can make a truth table for this neuron, which gives the conditions under which it will fire:

a  b  c  |  f
0  0  0  |  0
0  0  1  |  0
0  1  0  |  1
0  1  1  |  0
1  0  0  |  0
1  0  1  |  0
1  1  0  |  1
1  1  1  |  1


Fig. 1. For explanation see text.

The firing conditions of this neuron are, in Boolean variables:

f = ābc̄ + abc̄ + abc,   with k̄ = 1 − k
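Since Fig. 1 is not reproduced here, the firing rule of this neuron can be written out as a few lines of code. The weights used below (+1 for a, +2 for b, and an inhibitory −1 for c, with threshold 2) are an assumption inferred from the truth table; the sketch simply reprints that table.

```python
# McCulloch-Pitts style neuron of Fig. 1: it fires when the weighted sum of
# its inputs reaches the threshold. The weights are inferred from the truth
# table above (b alone suffices, c is inhibitory) and are thus an assumption.
from itertools import product

WEIGHTS = {"a": 1, "b": 2, "c": -1}   # c has an inhibitory ending
THRESHOLD = 2

def fires(a, b, c):
    s = WEIGHTS["a"] * a + WEIGHTS["b"] * b + WEIGHTS["c"] * c
    return int(s >= THRESHOLD)

print("a b c   f")
for a, b, c in product((0, 1), repeat=3):
    print(a, b, c, " ", fires(a, b, c))
```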

Suppose we have a network consisting of several neurons. At time t = 0 we have a certain firing pattern; after the synaptic delay δ we find the next firing pattern, caused by this pattern at t = 0. The succession of the patterns is given by the built-in logic of the circuit, or, to put it differently, by the transformation formulae of the firing conditions of the neurons. In general these are:

xᵢ' = fᵢ(x₁, x₂, . . . , xₙ)     (i = 1, . . . , n)     (1)

in which xᵢ = 1 if the i-th neuron fired at t = T and xᵢ = 0 if it did not fire at t = T, and xᵢ' = 1 if the i-th neuron fires at t = T + δ and xᵢ' = 0 if it does not.

We are now very interested in the behaviour of the circuit as time goes on. In the special case that the equations (1) are linear in the xᵢ, a general solution of the future behaviour of the circuit is easily found. Let us consider, for instance, the set given by:

x₁' = x₂
x₂' = x₃
x₃' = x₁

This is a very simple circuit: if at t = 0 the first neuron fires, then at t = δ the second neuron fires, at t = 2δ the third, and at t = 3δ the first again. The equations in this example may be written as:

{xᵢ}' = M {xᵢ}   in which

        | 0 1 0 |
    M = | 0 0 1 |
        | 1 0 0 |

Thus, given the firing pattern at t = 0, we can directly find the firing pattern at t = nδ by putting:

{xᵢ}* = Mⁿ {xᵢ} = Mⁿ⁻³ᵏ {xᵢ}   because   M³ = E

It is, of course, not by coincidence that the circuit as well as the matrix M have a period of three.
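This can be checked mechanically; a minimal sketch using the matrix M written out above:

```python
# Transition matrix of the linear three-neuron circuit {x}' = M {x}.
import numpy as np

M = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

for n in (1, 2, 3):
    print(f"M^{n} =\n{np.linalg.matrix_power(M, n)}")

# M^3 is the identity E, so every firing pattern recurs after three steps,
# i.e. with a period of three synaptic delays, as stated above.
assert np.array_equal(np.linalg.matrix_power(M, 3), np.eye(3, dtype=int))
```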

The general case cannot be handled this way because a non-linear system cannot be described by such a matrix. To find the general solution we have to proceed along different lines. If we have n neurons then we have 2ⁿ different firing patterns. These patterns we denote by binary numbers, from (000 . . . 00), no neurons firing, till (111 . . . 11), all neurons firing. The logic in the circuit determines the sequence of the patterns. The complete graph of all the patterns with their sequences we call the state diagram of this network.


Our original formula about the firing conditions of each neuron in its relation to the other neurons can also be transformed, following certain rules, into firing conditions of the possible firing patterns. In general we then get a set of 2ⁿ equations of the form:

(x₁x₂x₃ . . . xₙ)' = φ₁(x₁x₂ . . . xₙ, x̄₁x₂ . . . xₙ, . . . , x̄₁x̄₂ . . . x̄ₙ)
. . . . . . . .
(x̄₁x̄₂x̄₃ . . . x̄ₙ)' = φ₂ⁿ(x₁x₂ . . . xₙ, x̄₁x₂ . . . xₙ, . . . , x̄₁x̄₂ . . . x̄ₙ)

These equations are linear in the new variables because this set just forms the canonical expression for these Boolean functions.

We will illustrate the procedure for the non-linear case with the following example. Let us consider the set:

x₁' = 1 − x₁
x₂' = 1 − x₁ − x₂ + 2x₁x₂
x₃' = 1 − x₁ − x₂ − x₃ + x₁x₂ + 2x₁x₃ + 2x₂x₃ − 2x₁x₂x₃

If we transform this set of equations into the canonical space we find the following transformation:

(x₁x₂x₃)' = x̄₁x̄₂x̄₃        (x₁x₂x̄₃)' = x̄₁x̄₂x₃
(x̄₁x₂x₃)' = x₁x₂x₃        (x̄₁x₂x̄₃)' = x₁x₂x̄₃
(x₁x̄₂x₃)' = x̄₁x₂x₃        (x₁x̄₂x̄₃)' = x̄₁x₂x̄₃
(x̄₁x̄₂x₃)' = x₁x̄₂x₃        (x̄₁x̄₂x̄₃)' = x₁x̄₂x̄₃

The matrix for this transformation is given by:

        | 0 0 0 0 0 0 0 1 |
        | 1 0 0 0 0 0 0 0 |
        | 0 1 0 0 0 0 0 0 |
        | 0 0 1 0 0 0 0 0 |
    M = | 0 0 0 1 0 0 0 0 |
        | 0 0 0 0 1 0 0 0 |
        | 0 0 0 0 0 1 0 0 |
        | 0 0 0 0 0 0 1 0 |

The characteristic equation of this transformation is |M − λE| = (1 − λ⁸). When we plot the state diagram of this circuit we find:

x₁x̄₂x̄₃ → x̄₁x̄₂x̄₃ → x₁x₂x₃ → x̄₁x₂x₃ → x₁x̄₂x₃ → x̄₁x̄₂x₃ → x₁x₂x̄₃ → x̄₁x₂x̄₃ → x₁x̄₂x̄₃

In this case we find a period of eight, which is equal to the power of λ in the characteristic equation.

Note that this case is non-linear. If at t = 0 no neurons fire, then at t = δ all neurons fire.
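Both the period of eight and the characteristic equation can be verified mechanically. The sketch below iterates the three update rules given above, assembles the corresponding 8 × 8 permutation matrix on the firing patterns, and computes its characteristic polynomial (a numerical check only; the state ordering used for the matrix is a convenience and need not match the printed one).

```python
# Enumerate the state diagram of the non-linear three-neuron circuit, check
# that all eight firing patterns lie on one cycle of period 8, and compute
# the characteristic polynomial of the associated 8x8 permutation matrix.
import numpy as np
from itertools import product

def step(x1, x2, x3):
    return (1 - x1,
            1 - x1 - x2 + 2 * x1 * x2,
            1 - x1 - x2 - x3 + x1 * x2 + 2 * x1 * x3 + 2 * x2 * x3 - 2 * x1 * x2 * x3)

states = list(product((0, 1), repeat=3))
index = {s: i for i, s in enumerate(states)}

# Permutation matrix on the 2**3 firing patterns: M[next, current] = 1.
M = np.zeros((8, 8), dtype=int)
for s in states:
    M[index[step(*s)], index[s]] = 1

# Follow the cycle starting from (0, 0, 0): all neurons fire at the next step.
s, cycle = (0, 0, 0), []
for _ in range(8):
    cycle.append(s)
    s = step(*s)
print("cycle:", " -> ".join("".join(map(str, t)) for t in cycle), "-> 000")
assert s == (0, 0, 0)                       # period 8

# Characteristic polynomial of M: coefficients of lambda**8 - 1, i.e.
# (1 - lambda**8) up to sign, as stated above.
print("char. poly coefficients:", np.poly(M).round(6))
```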

We can prove in general that, given that the matrix of this transformation has a characteristic equation of the form

|M − λE| = λ^a₀ ∏ᵢ (1 − λ^aᵢ),

in that case, and only in that case, we have circuits with periods aᵢ and a₀ states that are not periodic; the inverse of the theorem also holds. For instance, a state diagram like Fig. 2 has a characteristic equation of the form λ³(1 − λ³)(1 − λ²).

Fig. 2. For explanation see text.


P R O O F OF T H E THEOREM

First we notice that all the states of a circuit, interpreted as vectors in a 2ⁿ-dimensional space, are independent.

Let us assume now that the circuit under consideration with matrix M has certain cyclic state groups. Suppose we have cycles with periods p, q, r, . . . The first cycle consists of p elements or states P₁ . . . Pₚ with

M Pᵢ = Pᵢ₊₁ ;   M Pₚ = P₁

The subspace defined by the vectors P₁ . . . Pₚ is invariant with respect to transformations by M.

In this subspace we have a basis of eigenvectors eᵢ with certain eigenvalues λᵢ. For our set Pᵢ we may now write: P₁ = Σ aᵢeᵢ. We now apply M and find: P₂ = Σ aᵢλᵢeᵢ. Repeating this procedure: Pₚ = Σ aᵢλᵢ^(p−1) eᵢ.

We first notice that all aᵢ ≠ 0, otherwise the p-dimensional space could be generated by only part of the basis.

We have the relation:

| 1          1         . . .  1         |   | a₁e₁ |   | P₁ |
| λ₁         λ₂        . . .  λₚ        |   | a₂e₂ |   | P₂ |
| . . .                        . . .    | · | . . .| = | . .|
| λ₁^(p−1)   λ₂^(p−1)  . . .  λₚ^(p−1)  |   | aₚeₚ |   | Pₚ |

The matrix on the right side consists of the p independent Pᵢ's, thus its determinant must be different from zero. The determinant with the aᵢ is also different from zero because aᵢ ≠ 0. Thus the determinant of the first matrix must be different from zero. Thus λᵢ ≠ λⱼ (i ≠ j).

Putting again P₁ = Σ aᵢeᵢ and applying M p times we get P₁ = Σ aᵢλᵢ^p eᵢ, or (λ^p − 1) = 0. This equation has p different roots and, as we have seen above, we need them all.

Thus for this subspace the characteristic equation is given by:

(λ^p − 1) = 0

The same procedure we apply to the other subspaces of M and we get as characteristic polynomial for the cyclic subspaces:

∏ᵢ (1 − λ^aᵢ) = 0,    aᵢ = p, q, r, . . .

Finally we consider the subspace of M constituted by the states that are not in a cyclic succession. For this subspace all eigenvalues have to be zero, otherwise part of it could be invariant. The characteristic polynomial thus becomes

|M − λE| = λ^a₀ ∏ᵢ (1 − λ^aᵢ)    with a₀ + Σᵢ aᵢ = 2ⁿ

On the other hand, given M has a characteristic polynomial of the form

λ^a₀ ∏ᵢ (1 − λ^aᵢ) = 0

it is clear that: (1) the subspaces defined by the roots of λ in

(1 − λ^aᵢ) = 0

are invariant; (2) that for any vector V in this subspace the relation holds:

M^aᵢ (V) = V,

while

M^bᵢ (V) ≠ V    for (aᵢ, bᵢ) ≠ aᵢ.

To illustrate this approach let us consider the following problem. We want to design a device with the following properties. If the input signal is zero, there is no output signal corresponding to this input signal. If we have one pulse at the input, we find one pulse at output H. If we have any sequence of pulses longer than one at the input, we have a sequence of pulses at output C.

This is, as you may understand from the properties, the temperature sensing device which gives the wrong sensation for a cold stimulus if it is applied for too short a time.

Fig. 3. State diagram of a temperature sensing device with the properties as described in the text.

In designing this device we use directly the state diagram as given by the properties. At rest the system is in state (00). If one pulse enters the device, this changes from state (00) to state (01). If there fails to appear a pulse in the next interval of time, the device goes to state (10), which is critical for the erroneous perception of warmth; from state (10) the system goes back to state (00) if there is no input present.

If, however, two or more pulses are fed into the device it will go to state (11) via state (01), and state (11) is critical for the perception of cold. If the sequence is ended it goes directly from state (11) to state (00).
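The state diagram of Fig. 3 translates directly into a small synchronous automaton. The sketch below follows the transitions just described; the interval-by-interval input convention (1 for a pulse present, 0 for no pulse) and the transitions not spelled out in the text are assumptions and are marked as such.

```python
# Synchronous finite-state realisation of the temperature-sensing device of
# Fig. 3. States are the firing patterns (00), (01), (10), (11); the input is
# 1 when a pulse arrives in the current interval and 0 otherwise. Output H is
# emitted while the device is in state (10), output C while it is in (11).
# Transitions marked "assumed" are not spelled out in the text.
TRANSITIONS = {
    ("00", 0): "00",
    ("00", 1): "01",   # first pulse
    ("01", 0): "10",   # pulse not followed up: erroneous "warm"
    ("01", 1): "11",   # second pulse: "cold"
    ("10", 0): "00",
    ("10", 1): "01",   # assumed: a new pulse starts a new sequence
    ("11", 1): "11",   # the sequence of pulses continues
    ("11", 0): "00",   # the sequence has ended
}
OUTPUT = {"00": "-", "01": "-", "10": "H", "11": "C"}

def run(pulses, state="00"):
    trace = []
    for p in pulses:
        state = TRANSITIONS[(state, p)]
        trace.append((p, state, OUTPUT[state]))
    return trace

for name, pulses in [("single pulse", [1, 0, 0]),
                     ("pulse train ", [1, 1, 1, 1, 0])]:
    print(name, run(pulses))
```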

The design of this device with the help of a state diagram has been very straightforward. From this state diagram we can derive the transformation formula for the neurons in our device, and once we are so far we have to think about the effectuation of this formula with neural logic.

In this fairly simple case a solution, as far as I know the simplest solution, can be found.


Fig. 4. Neural network satisfying the conditions of the state diagram of Fig. 3.

As you can see in Fig. 4, the logic has been condensed into only two neurons, while the other two neurons are necessary to couple the two critical states (10) and (11) to the temperature perceptors.

Reconsidering the analysis of networks with our ideal neurons in the light of reality, we feel that somewhere truth must have slipped out. There are, in fact, many serious objections to considering the brain or parts thereof as a synchronous logical device.

We might expect, however, that in a statistical sense the relations and properties as outlined before may hold. When we consider the system not as a synchronous, autonomous device but as a device with fluctuating thresholds, many noisy inputs and artefacts we do not know about yet, some of the remarks may still be consistent. We could imagine that the whole population of neurons at rest - in the sense that there are no trivial inputs - could be described by a statistical state diagram of cyclic origin, the period of this cycle being determined by the gross properties of the neurons under consideration. Once a signal enters this 'rhythmic chaos', such as a light flash, a click, an idea, then statistically this system is brought into a new state, depending upon the conditions at the moment of the input, the input itself and the state of the system. From this new state the system then seeks its way again to the statistical cyclic set.


Fig. 5. Without inputs the brain is in a sequence of cyclic states Sᵣ; every state facilitates or inhibits a specific response. When a stimulus is applied, the system is brought into a new set of states Sᵢ, which leads, after several intermediary states, again to the cyclic set Sᵣ.

Now we will return to the experiments. It has been shown that at the very moment when the eye starts its movement, the threshold for visual perception is raised. In a somewhat extended experiment of this kind we presented a flash of 20 μsec, about 3 log units above threshold, to an eye which was about to move, was moving, or had just finished moving. We plotted a curve for the chance of perception vs. time with respect to the onset of the eye movement (Fig. 6).


Fig. 6. The chance of perception of a flash of short duration in relation to its presentation with respect to the onset of the eye movement.

The most remarkable thing about this curve is that the threshold is raised before the eye starts its movement. Therefore, the brain anticipates this movement, and it is the blind mind which allows the eye to move. There is, however, one more remarkable point. There is another period of relative blindness, following the eye movement. During this period the eye did not move. But we think that if the eye were to move again it would have moved during this period, and if we had extended our range of measurement we might have found more dips than two, in fact a whole train of holes in visual perception. I need not stress the resemblance between this point, the fine structure in reaction times, the properties of the model and the periodic patterns in the EEG.

All this evidence points towards some kind of a built-in clock which carefully programs our motor actions.

S U M M A R Y

The sequential behaviour of a set of idealized neurons is investigated with mathematical methods similar to those in the theory of linear sequential networks. If such an approach to describing cortical events is assumed to be valid, it is very simple to explain the periodicities found in reaction time experiments.


Finite Automata and the Nervous System

F. H. GEORGE

Department of Psychology, University of Bristol, Bristol (Great Britain)

I . I N T R O D U C T I O N

In this paper it is hoped to further the work done, from the conceptual point of view, towards building up a model of the human nervous system.

Cybernetics is concerned with building models of all kinds, not merely models that simulate human activity in some form; in this paper, however, we shall concentrate not only on the simulation of human behaviour, but more narrowly on simulating the activity of certain aspects of the human nervous system.

This presents some difficulty because of the interconnected manner in which one suspects the brain - and for that matter the whole of human behaviour - operates. What I want to talk about is human thinking and the way in which the nervous system may function to make this possible, but this pre-supposes a great deal about perception and other features of the brain's function that I do not wish to discuss and yet are undeniably relevant. Furthermore, I want to appeal to neurophysiological evidence and do not pretend to have expert knowledge of this field. This, of course, lays me open to the obvious criticism of amateurism, and the only excuse I can offer is that from the cybernetic point of view, actual nervous systems and conceptual nervous systems are not really very different. The point being, of course, that when neurophysiologists tell us how they think the nervous system works they are, in fact, describing a conceptual nervous system, and the advantage of starting at the other end (the conceptual end) of the proceedings is that the model so described is capable of being precisely tested. This is not, quite obviously, meant to imply that empirical studies of actual nervous systems are unimportant; they naturally are the whole basis of the subject. But such studies do not represent the only possible approach to an understanding of the brain.

First of all we must consider our method of model construction. Of all the kinds of automata that have been suggested, it is natural that we should consider those that are called logical nets, or neural nets as we shall call them in this paper. Very briefly, these are collections of cells that are represented diagrammatically by circles (representing nerve cells), and which are connected by lines (representing nerve fibres); the fibres have two different sorts of endings on the cells, excitatory and inhibitory (represented in our diagrams by closed-in triangles and open circles respectively; Stewart, 1959). Within each circle appears a number (a real integer) that is the threshold number of the cell and which simply says what the balance of live excitatory over live inhibitory inputs must be at any moment to cause that particular cell to fire. The words 'to fire' mean 'to make the output live during the next instant of time'.
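A conceptual net of this kind is straightforward to simulate. The sketch below is a generic synchronous stepper under the conventions just stated (integer thresholds, excitatory and inhibitory endings, one instant per firing); the particular three-cell net and its connections are an invented example, not the net of Fig. 1.

```python
# Generic synchronous logical-net stepper: at each instant a cell fires iff
# (number of live excitatory inputs) - (number of live inhibitory inputs)
# reaches its threshold. The three-cell net below is an invented example
# (not the net of Fig. 1): C has an inhibitory ending from the input line,
# so it fires only when the input train stops.
from dataclasses import dataclass, field

@dataclass
class Cell:
    threshold: int
    excitatory: list = field(default_factory=list)   # names of exciting lines
    inhibitory: list = field(default_factory=list)   # names of inhibiting lines

NET = {
    "A": Cell(1, excitatory=["in"]),                  # relays the external input
    "B": Cell(1, excitatory=["A"]),                   # one instant behind A
    "C": Cell(1, excitatory=["B"], inhibitory=["in"]),
}

def step(state, external):
    live = dict(state, **external)                    # lines live at this instant
    return {name: int(sum(live.get(s, 0) for s in cell.excitatory)
                      - sum(live.get(s, 0) for s in cell.inhibitory)
                      >= cell.threshold)
            for name, cell in NET.items()}

state = {name: 0 for name in NET}
for t, pulse in enumerate([1, 1, 1, 0, 0, 0], start=1):
    state = step(state, {"in": pulse})
    print(f"t = {t}  in = {pulse}  state = {state}")
```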

Any such assemblage of cells is assumed to operate on a time basis, so that any cell takes one instant to fire and the pulse is assumed to traverse the fibres instantaneously. Fig. 1 shows a simple assemblage or network, and the conventions mentioned can easily be traced out from this (George, 1961).

Fig. 1. A simple neural network, showing a method for deriving partial self-classification.

Now the most obvious things to say about such models are: (1) that they can be manufactured in hardware by the use of almost any two-way switching system; (2) that they can be simulated on a general purpose digital computer; (3) that they have a precise mathematico-logical description; and, most important of all: (4) that they are not, in spite of their obvious resemblance, exactly like the nerves of a human nervous system.

So let us ask in what way they differ from the known properties of actual nervous systems.

1. In practice there is a great deal of variation in the refractory period of nerve cells, and in our nets we do not distinguish at all between different cells in this respect.

2. The speed of conduction along actual nerve fibres varies very considerably, whereas our nets all have constant speed fibres.

3. There are, in actual nervous systems, interactions across the membranes between different nerve cells, and this does not happen in our neural nets.

4. There are biochemical and electrical changes going on in nerves when activation is taking place and obviously neural nets do not take account of this.

5. Finally, of all the other differences, the most obvious is perhaps that real nerves can be classified into those with myelin sheaths and neurilemma and so on whereas neural nets cannot.

Over and above this there are many characteristics of the nervous system such as summation, fractionation, recruitment, etc., which we must be prepared to simulate if our conceptual nervous systems are to approach more closely to the real thing.

We must cover these points straightaway, and our main argument is that we can overcome the variation in speeds of conduction (this covers 1. and 2.) and other differences, due to different refractory periods, by the use of delay elements in our model.


The electrochemical and other organic factors which make up the nervous impulse, we shall assume to be connected with the manner in which information is processed in the nervous system, and not with the actual flow of information itself. In other words, our conceptual nets can be regarded as passing the same information, without going into physico-chemical details.

Now the final point is that summation, fractionation and recruitment (George, 1961) are all easily simulated, and indeed are even conceptually necessary in any system which is transmitting information from place to place (e.g. the retina to the visual cortex), while needing to economize on the number of fibres available.

One other point to mention here is the relation between digital and analogue types of model. Without wishing to argue about the use of analogue methods in the brain, it can certainly be said that analogue methods can be simulated perfectly well in digital terms, by taking digital measures as small as we please. This then is no barrier to the representation of analogue processes in neural net terms.

But this is not the end of our conceptual difficulties. We are suggesting a model of cells that are all pre-wired to simulate a system which starts in some sort of coded form and only grows, in the course of time, to the fully wired system that we have in mind. This means, as many people have suggested, that the right model for a human nervous system is not a pre-wired model at all, but a model capable of growth. This is one of the central considerations in this paper. In spite of this we shall press the case for the pre-wired model.

We must deal with this point about growth, to some extent, immediately, however briefly. During the growth period from the embryonic stage until adulthood, the nervous system grows and modifies itself, and to accurately simulate this period of development suggests the need for growth nets (Chapman, 1959; Pask, 1959) or some sort of growing model. But we might still reasonably argue that the completed nervous system can be accurately mirrored by a pre-wired model. In fact, of course, (Hebb, 1949) we know that this itself may not be strictly accurate as the adult may still modify this internal nervous state by a growth process such as that suggested by Kappers.

One of our central questions is now: to what extent is the fact that growth takes place in the early phase of development a factor influencing the completed system? In other words, let us start with our pre-wired form of model and examine some of its limitations, some of which exist because it has grown to the adult state, even assuming this adult state to be pre-wired.

Another central consideration of this paper is the organization of the human con- ceptual system, and here we can only outline some suggestions in general terms, and earmark some methods for furthering such a study.

This is a cybernetic paper, not a neurophysiological one, and the questions being asked are part of an attempt to find some answer to the question: which sorts of conceptual model of the nervous system are likely to be rewarding in studying the functioning of the human brain? As Kleene puts it: 'These assumptions (those embodied in neural nets) are an abstraction from the data which neurophysiology provides. The abstraction gives a model, in terms of which it becomes an exact mathematical problem to see what kinds of behaviour the model can explain. The question is left open how closely the model describes the activity of actual nerve nets; and some modifications in the assumptions lead to similar models. Neurophysiology does not currently say which of these models is most nearly correct - it is not plausible that any one of them fits exactly.'

So our aim is to narrow the field of possible conceptual models.

II. N E U R A L A N A L O G U E S O F C E N T R A L N E R V O U S O R G A N I Z A T I O N

We must try to give some sort of plausible account of brain function in terms of the principles and methods already at our disposal, if only to discover in what ways we still fall short of the desired aim. These principles and methods include classification in various forms, specific analyzing mechanisms and the general methods amenable to pre-wired systems.

In the first place let us assume that our perceptual system is built up on the basis of classification, coupled with special analyzing mechanisms. The principles are those suggested by Hayek (1952), Uttley (1955), Chapman (1959) and others. As far as vision is concerned, at the periphery we might expect to find some such organization as suggested by Osgood and Heyer (1952). This suggests that the retina transmits to the visual cortex normal distributions of excitation such that the maxima of the distributions represent 'seen' contour lines. Eye movements are essential to this model, which further presupposes the interaction of distributions causing distortions and displacements in the visual system. To avoid the difficulties attendant on visual discrimination, it is probable that the Osgood-Heyer model needs to be supplemented by the following postulate:

With fatigue, the distribution of excitation in the visual cortex flattens out, decreasing the maximum value of the distribution and increasing its standard deviation.

This paper is not intended to discuss the rightness or wrongness, nor the details of this sort of model. We are concerned with central organization, and will thus dwell only on those sensory features which are inescapably bound up with that central organization. In other words, regardless of the particular organization of the sensory cortex with respect to the sensory modalities, some central features of organization still remain more or less inevitable - or so our theory presupposes.

In fact we meet an important difficulty immediately (Pribram, 1960) as it seems likely that the sensory inputs are modified directly by the nervous system. This we shall mention again later as it is central to the new point of view.

Now in crude terms what we want is a system that is energised from within; thus the limbic system is thought to be made up of homeostats and has its homeostatic limits changed as a function of central experience. The reticular formation might be expected to work here with the limbic system in determining degree of self-awareness or consciousness. But the more important point is that whether the energising comes wholly from within, or from without in conjunction with internal energising, the brain must function itself as a comparator, and a modifier of the homeostatic devices, and indeed as a self-modifier with respect to its sensory and motor experience.


dynamic and dependent upon such things as the relations between electrical potentials at different points where the synchronization of firing is a vital factor in a stimulus eliciting a response.

At the same time storage must also involve the setting up of connections between the sensory and motor system. While talking of a central classijication system to simulate the process of thinking and especially abstract thinking, we are reallv talking again of density of connection or sensitivity to inputs and outputs. It should again be mentioned (Stewart, unpublished) that the emphasis is now swinging away from the idea of a sensory control of the brain’s activities to motor control. The motor oper- ations are feeding back in the brain in a homeostatic manner and modification of inputs occurs right at the moment of their first impinging on the human sensory system. This simply means that the brain is not just an input-output system but a graded control system with emphasis much more on output than input. This empha- sises, if we may be pardoned for using this sort of terminology, the searching and purposive aspects of human brain function.

To put this all into cell assembly terms, we are saying that cell assemblies are the method of storing that is the most important to the human being, and this is a self- modifying process which works selectively and progressively short-circuiting the number of cells involved in a particular conceptual association say, as the person passes from learning to having learned. In neural net terms this implies the use of cells that are not specific to a particular set of items learned, and this can be catered for in a pre-wired system (the principle illustrated by the net of Fig. 1). It must be ad- mitted that the method does not immediately commend itself on grounds of neuro- physiological plausibility. Resisting once more the temptation to abandon neural networks, it can be guessed that, neurophysiologically speaking, more plausible nets can be drawn, but the temptation is to accept the Chapman type of growth net, which although pre-wired is self-organizing and could more easily change the routing through the cortex from indirect while learning to direct when learned.

However, to return to the question of human thinking, we are saying that this is represented by a classification system that is initially a growing system where con- nections are made as the organism matures, and that these connections are modified in their detail as a function of early experience. When maturity is reached it is sug- gested there are partitioned partial classifying systems which can still be modified by further learning only within the framework of that already wired partial partitioned system. In general it may be supposed that original ideas and creative activities of one kind or another are now dependent upon the full realisation of all the trees and branches of the classification system. It is natural, furthermore, to assume that the frontal lobes house these higher trees and branches of the classification system.

Naturally also we should be inclined to assume that these higher reaches of the brain are closely concerned with symbols and signs and verbal labels in general and in their relation to the nervous representation of empirical experience. Let us at this point summarise our sketchy model in a few brief statements.

1. The brain is assumed to be represented by a growth model while maturing, but a fully connected model after maturity.

Page 51: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

42 F. H. G E O R G E

cell assembly. However, even this degree of accuracy is modified, as we have mentioned, by the fact that the central organization must modify the sensory output from the start, even to the extent of influencing what we select from our environment - and this suggests such theoretical terms as set and attention.

We shall think of our central organization initially as a classification system, one that is self-organizing although pre-wired; in neural net terms, therefore, a whole set of models suggest themselves (George, 1961). It should be noticed that the sort of model that is easily drawn up to simulate our needs is probably uneconomical, in that it will in fact be more than a complete classification system. This leaves the doubt as to whether or not we can afford a complete classification system for the conceptual processes of the brain.

Next we must mention the important point that it is impossible to construct a partial self-organizing classification system in the present neural net terms if it is pre-wired and also needs to be more economical than complete classification. Chapman's method (1959), which is somewhat similar to Milner's Mark II cell assembly, supplies a modification by which thresholds are changed when a particular combination of inputs (or cells) fires. This leads to a partial classification system which can only be economically achieved by assuming the property of change of sensitivity of synapses as a direct function of stimulation. We may find ourselves driven to this form of model but will avoid it as long as possible since we wish for various reasons to preserve our neural net schema.
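A minimal sketch of this kind of threshold modification might run as follows (in Python; the class name, the size of the threshold step and the floor value are illustrative assumptions, not details taken from Chapman):

# A minimal sketch of threshold modification: when a particular combination
# of inputs fires a cell, that cell's threshold is lowered, so the combination
# is recognised more readily next time.
class ThresholdCell:
    def __init__(self, inputs, threshold, step=0.2, floor=1.0):
        self.inputs = set(inputs)     # the input lines this cell watches
        self.threshold = threshold
        self.step = step              # how much each firing sensitises the cell
        self.floor = floor            # thresholds never fall below this value

    def present(self, active_inputs):
        """Return True if the cell fires on this combination of inputs."""
        drive = len(self.inputs & set(active_inputs))
        if drive >= self.threshold:
            # firing changes the sensitivity of the 'synapses'
            self.threshold = max(self.floor, self.threshold - self.step)
            return True
        return False

cell = ThresholdCell(inputs={"a", "b", "c"}, threshold=3.0)
print(cell.present({"a", "b", "c"}))  # True: the full combination is needed at first
print(cell.present({"a", "b"}))       # False: threshold is still 2.8
for _ in range(5):
    cell.present({"a", "b", "c"})     # repeated firing keeps lowering the threshold
print(cell.present({"a", "b"}))       # True once the threshold has dropped to 2 or less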

It should be clear that a neural net such as that in Fig. 1 can easily perform the desired function. This, of course, is only one of a vast number of possible nets. It is trivial that this sort of thing can be achieved, but it is important to notice that by neural nets alone a very large number of cells would be necessary.

The alternative, of course, is to consider cells whose fibres are not all connected and whose connections are set up by an actual growth process. We shall assume that such growth, probably electrochemically controlled, does take place during the formative period of the human being, from conception to maturity, but that subsequently the system remains anatomically fixed, and changes only functionally. Now consider for a moment the job that our conceptual system must perform. Thinking (Bruner et al., 1956) can be constrained to the collection of classified evidence, so that either conjunctive or disjunctive hypotheses suggest themselves.


Fig. 2. A simple neural network, showing the associations (and counters) needed to simulate hypothesis-construction.


This can easily be catered for in a machine type of system (Price, 1962), so that Fig. 2 shows how the suggestion of a hypothesis may occur within the machine model.

Thinking is not thought to be wholly defined by the Bruner et al. type of experiment, but insofar as it covers much of human problem solving and hypothesis making, it can easily be simulated in nervous nets. This requires no more than sets of counters to count the conjunction or disjunction of known properties. The net needs to be extended in two ways: the first is to relate the conceptual classificatory net to the motor system, and the second is to integrate these partial classificatory nets into extended branches, which mix the information from each partial net. The extended branches are to allow for further hypotheses or greater and greater generality. Also, of course, we may expect to find conditional probability computing available between the counters of partial and mixed classification systems.

Now we must consider some questions of dimension at this point. Complete classification is hopelessly implausible in the human sensory system if taken over all possible inputs together. However, this leaves two important loopholes for our central classifying system: (a) the classifying system may be partitioned; that is, the classification may be assumed to be carried out in phases where, for example, a, b and c may be classified in phase I, and d, e and f in phase II; (b) the outputs of these phases, taken together, can then be classified separately. This effects a great economy over anything like a complete classification system, and one that is obviously necessary in the case of the various sensory modalities, which are necessarily partitioned into visual, auditory, etc.
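The economy can be indicated by a rough count, on the simplifying assumption that a complete classification of n two-valued properties requires a counter for each of the 2^n combinations; the figures below are illustrative only:

# A rough arithmetic illustration of the economy of partitioning, assuming
# complete classification of n two-valued properties needs one counter (or
# cell) for each of the 2**n combinations.
def complete(n):
    return 2 ** n

def partitioned(phase_sizes):
    # each phase is classified completely on its own, and then only the
    # phase outputs are classified together
    phases = sum(complete(k) for k in phase_sizes)
    outputs = complete(len(phase_sizes))
    return phases + outputs

print(complete(6))          # 64 combinations when six properties are taken together
print(partitioned([3, 3]))  # 8 + 8 + 4 = 20 when they are split into two phases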

We are interested in the fact that a set of counters which are constrained to count the occurrence or non-occurrence of properties is all that is needed to set up the basis of the conjunctive or disjunctive types of hypothesis so common in human problem solving. Such counting devices can supply the basis of all logical arguments that have ostensive bases or are capable of being reduced to simply named properties such as red, round, square, etc. From this we shall say that the classification system is constructed of counters, and the system itself is partial and partitioned. Now the vital point is that language (or symbols, signs, etc.) forms part of this classification system, and the higher level of this hierarchy is made up of counters for symbols and signs only, the point being that our eventual abstractions in thinking are in words, and the arguments used are linguistic, and furthermore this does not rule out the relation between words and concepts. This is now a one-many relationship, reading from the top of the branches down through the classificatory tree.
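A minimal sketch of such counters, with invented property names and thresholds, might take the following form; it merely counts co-occurrences of named properties with a successful outcome and reads off conjunctive and disjunctive candidates:

# Counters that record co-occurrence of simply named properties with an
# outcome, and then suggest conjunctive or disjunctive hypotheses. The
# cut-off levels and the little data set are illustrative assumptions.
from collections import Counter

class HypothesisCounters:
    def __init__(self):
        self.positive = 0        # number of instances with the outcome
        self.counts = Counter()  # property -> co-occurrences with the outcome

    def observe(self, properties, outcome):
        if outcome:
            self.positive += 1
            self.counts.update(set(properties))

    def conjunctive(self, level=1.0):
        """Properties present in essentially every positive instance
        (a conjunctive hypothesis: 'red AND round')."""
        return {p for p, c in self.counts.items()
                if self.positive and c / self.positive >= level}

    def disjunctive(self, level=0.5):
        """Properties each present in a fair share, though not all,
        of the positive instances (a disjunctive hypothesis)."""
        return {p for p, c in self.counts.items()
                if self.positive and level <= c / self.positive < 1.0}

h = HypothesisCounters()
h.observe({"red", "round"}, outcome=True)
h.observe({"red", "square"}, outcome=True)
h.observe({"green", "round"}, outcome=False)
h.observe({"red", "round"}, outcome=True)
print(h.conjunctive())   # {'red'} - present in every positive instance
print(h.disjunctive())   # {'round'} - present in a fair share of the positives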

Language is thus to be conceived of as being related to concepts in much the same way as instructions to numbers in a digital computer. Words and concepts are, as it were, cross-indexed and words themselves are represented in classificatory systems with counters.
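The cross-indexing can be pictured, very roughly, as a pair of tables built from one another, as in the following fragment (the words and concepts are invented for illustration):

# A small sketch of the 'cross-indexing' of words and concepts suggested by
# the analogy with instructions and numbers in a digital computer; the
# one-many direction (word -> concepts) is the one stressed in the text.
word_to_concepts = {
    "tree": ["oak", "elm", "wood", "forest"],   # one word, many concepts
    "wood": ["forest", "timber"],
}

# the reverse index is built mechanically, like a computer's symbol table
concept_to_words = {}
for word, concepts in word_to_concepts.items():
    for concept in concepts:
        concept_to_words.setdefault(concept, []).append(word)

print(word_to_concepts["tree"])    # reading down the classificatory tree
print(concept_to_words["forest"])  # ['tree', 'wood'] - concepts point back to words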

Now we are concerned with neurophysiological plausibility, and what this actually suggests - a system which is conceptually accurate and still at least couchable in neural net terms - is the following.

In the initial stages of growth we shall assume that fibres grow together as a function of two variables: (1) the genetic code, which prescribes the types of connection that would occur if no experiential factors intervened, and (2) the effects of experience on the system.


This means that learning is probably vital in determining what the ultimate structure of the nervous system is.

It should be made clear that growth is influenced by genetic coding as far as the chemical detail is concerned, and we have agreed that the chemical detail - at this level of description - is less important than an understanding of the actual flow and storage of information. Furthermore, growth is a matter of directed replication and organization. This can be simulated in neural net terms, if we accept only the fact that replication of subsets of neural nets is permissible. Work along these lines is now taking place at Bristol (Smith, 1962).

After growth has finished - or roughly finished, granting some small further amount of growth and regeneration - we may expect the learning to be the type of learning seen in a pre-wired system.

But even when growth as such is complete, we may still expect learning to take place by a variation of the sensitivity of the system. This is where we must look for a partitioned classification system which has come about as a result of self-organization, as a function of experience. Taking first the completed system (representing adult thinking and learning), we may expect to find a partitioned and partial classification system set up, which can be represented by Fig. 3. This gives a typical sample of the three-dimensional array of cells and their interconnections that may occur.

Fig. 3. A sample of the tree structure of a partitioned partial classification system, for storing words (and sentences). (The tree drawn in the original runs from WORD 1, WORD 2 and WORD 3 through SUB-CLASS WORD 1, 2 and 3 to SUPRACLASS WORD 1 and 2.)

It need hardly be added that facts of cortical overlap suggest that the different cell assemblies (Hebb, 1949; Milner, 1957) involved are themselves a direct function of experience.

We also want, though, to cater for cell assemblies on many different levels of generality. Thus we also wish to find a special place for the use of signs and symbols, which play a vital part in human thinking.

Now what we have in a schematic sense is represented in Fig. 3. This sort of family tree organization allows any particular concept to be stored in a position with a whole set of immediate associations. Stimulate the concept 'tree' in the above figure, and various other stored concepts, ranging from woods, oaks and elms down to particular woods in particular countries, are also aroused. Indeed they may not all reach threshold level, but they are still capable of doing so.


The factors that determine the conditions for arousal will at least include: (1) the time of the next stimulation; (2) the urgency with which a response is needed; (3) the state of 'awareness' of this system when stimulated; and so on.
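A crude sketch of such arousal, with an invented tree and arbitrary decay and threshold values, is the following; it is intended only to show how some associates reach threshold while others are stirred but remain below it:

# Spreading arousal in a small family-tree store of the kind shown in Fig. 3.
# The tree, the decay factor and the threshold are illustrative assumptions.
associations = {
    "tree":   ["wood", "oak", "elm"],
    "wood":   ["forest", "timber"],
    "oak":    [],
    "elm":    [],
    "forest": [],
    "timber": [],
}

def arouse(start, strength=1.0, decay=0.5, threshold=0.4):
    """Spread activation outward from the stimulated concept."""
    level = {start: strength}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour in associations[node]:
            passed = level[node] * decay
            if passed > level.get(neighbour, 0.0):
                level[neighbour] = passed
                frontier.append(neighbour)
    above = {c for c, v in level.items() if v >= threshold}
    below = {c for c, v in level.items() if v < threshold}
    return above, below

reaching_threshold, subliminal = arouse("tree")
print(reaching_threshold)  # {'tree', 'wood', 'oak', 'elm'} - aroused outright
print(subliminal)          # {'forest', 'timber'} - stirred but below threshold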

Fig. 4. A neural net model which shows a simple association net, with appropriate selective reinforcement. Thus A or B alone will fire AB as a function of the weight of previous (relatively short term) experience.

Furthermore, where signs exist, not only are the concepts stimulated, but the labels (symbols, signs, etc.) associated with the concepts are also aroused. Linguistically we may expect that words are stored, like concepts, in a many-one relationship with other words. Indeed a word may occur at the confluence of a store in which the various cross-store items are the range of sentences in which the word might occur. We are now, though, getting too remote from the problem of neural modelling.

Let us ask ourselves a vital question that has frequently been asked before. Does the brain have specific locations for particular information or does it use general registers like a general purpose digital computer? Is it thus capable of taking in any information anywhere and then replacing it with something else? Or does it indeed work to some extent on both systems?

We shall suppose the latter, that it works both ways. Thus the areas of the cortex are specific but overlapping. What is stored in the general store is general and yet interlocked with particularities, so that similar things are evoked by the same stimulation. The principle of association clearly works here, leaving little doubt that like things are aroused by any particular stimulation. Thus electrical stimulation of the brain (Penfield and Rasmussen, 1950) characteristically produces related events at related spatio-temporal points. Furthermore there is much evidence for functional overlapping in the cerebral cortex (Liddell and Phillips, 1951).

We may assume then that, while reverberatory circuits occur in the nervous system and - as the work of Burns (1958) makes clear - a small amount of stimulation can produce a mass of nervous activity, the storage of events in the brain is to some extent dynamic and dependent upon such things as the relations between electrical potentials at different points, where the synchronization of firing is a vital factor in a stimulus eliciting a response.

At the same time storage must also involve the setting up of connections between the sensory and motor system. While talking of a central classification system to simulate the process of thinking, and especially abstract thinking, we are really talking again of density of connection or sensitivity to inputs and outputs. It should again be mentioned (Stewart, unpublished) that the emphasis is now swinging away from the idea of a sensory control of the brain's activities to motor control. The motor operations are feeding back in the brain in a homeostatic manner, and modification of inputs occurs right at the moment of their first impinging on the human sensory system. This simply means that the brain is not just an input-output system but a graded control system with emphasis much more on output than input. This emphasises, if we may be pardoned for using this sort of terminology, the searching and purposive aspects of human brain function.

To put this all into cell assembly terms, we are saying that cell assemblies are the method of storing that is the most important to the human being, and that this is a self-modifying process which works selectively, progressively short-circuiting the number of cells involved in a particular conceptual association, say, as the person passes from learning to having learned. In neural net terms this implies the use of cells that are not specific to a particular set of items learned, and this can be catered for in a pre-wired system (the principle illustrated by the net of Fig. 1). It must be admitted that the method does not immediately commend itself on grounds of neurophysiological plausibility. Resisting once more the temptation to abandon neural networks, it can be guessed that, neurophysiologically speaking, more plausible nets can be drawn; but the temptation is to accept the Chapman type of growth net, which although pre-wired is self-organizing and could more easily change the routing through the cortex from indirect while learning to direct when learned.
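The short-circuiting can be caricatured as follows (the weights, the increment and the criterion are arbitrary assumptions; the point is only the switch from an indirect to a direct route):

# While an association is being learned it runs through an indirect chain of
# cells; repeated use strengthens a direct link until the route switches over.
class Association:
    def __init__(self, indirect_chain):
        self.indirect_chain = indirect_chain   # e.g. ['A', 'x1', 'x2', 'B']
        self.direct_weight = 0.0               # grows with every successful use

    def route(self, criterion=1.0):
        """Return the cells actually traversed from the first to the last."""
        if self.direct_weight >= criterion:    # 'having learned': direct route
            return [self.indirect_chain[0], self.indirect_chain[-1]]
        return list(self.indirect_chain)       # 'learning': full indirect route

    def use(self, increment=0.25):
        path = self.route()
        self.direct_weight += increment        # each use sensitises the short cut
        return path

assoc = Association(["A", "x1", "x2", "B"])
for trial in range(6):
    print(trial, assoc.use())
# trials 0-3 run A -> x1 -> x2 -> B; from trial 4 onwards only A -> B is used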

However, to return to the question of human thinking, we are saying that this is represented by a classification system that is initially a growing system where connections are made as the organism matures, and that these connections are modified in their detail as a function of early experience. When maturity is reached it is suggested there are partitioned partial classifying systems which can still be modified by further learning only within the framework of that already wired partial partitioned system. In general it may be supposed that original ideas and creative activities of one kind or another are now dependent upon the full realisation of all the trees and branches of the classification system. It is natural, furthermore, to assume that the frontal lobes house these higher trees and branches of the classification system.

Naturally also we should be inclined to assume that these higher reaches of the brain are closely concerned with symbols and signs and verbal labels in general and in their relation to the nervous representation of empirical experience. Let us at this point summarise our sketchy model in a few brief statements.

1. The brain is assumed to be represented by a growth model while maturing, but a fully connected model after maturity.


2. Learning during growth modifies the partial and partitioned classification system that emerges at maturity.

3. The brain operates by setting up cell assemblies roughly in the Hebb-Milner manner, except that the cell assemblies at the conceptual level are thought to take the form of a partial partitioned classification system.

4. The brain is to be thought of as a representational system for controlling the homeostats, which directly control much of human behaviour, and for providing a system for conceptual thinking.

5. Although language may not be necessary to thinking, it is assumed that language is the vehicle for actual human thought, and is necessary to the highest level of thinking. At our highest classificatory levels, symbols, words, etc., occur alone.

The neural facts of the human brain, insofar as they are represented by current neurophysiological research, are not inconsistent with this model, and if we bear in mind the overlapping of networks which are multiplexed (Von Neumann, 1953) and where neuron pools exist that may be part of different cell assemblies at different times, the idea becomes fairly clear and not altogether implausible.

But it is clear to the reader that the attempt to state the model in neural net terms has not actually been carried through in detail. This is by no means impossible to carry through, but extremely difficult. It is literally nearly impossible to draw up such a network. One alternative which will certainly be given a great deal of consideration in the future will be that of producing a ‘seed’ and designing its capacities so that such conceptual models as we are describing will ultimately appear from the seed. This we have mentioned earlier in terms of growth (Smith, 1962). Alternatively, we have a method - and this is our method for testing our model - which we shall describe in section IV.

III. MOTIVATION AND THE CONCEPTUAL MODEL

The model we have outlined so far suffers from the obvious defect that it does not, in any sense, take care of the problem of selective reinforcement.

It is well known that a motivational system exists in human beings, and indeed in all organisms, so that needs are recorded, and the need-reducing system is certainly a part of what energises the conceptual activity of the nervous system. This energising activity is transmitted to the nervous system directly from the organism and either initiates searching behaviour, whether the search be external overt search or an internal search through storage, or will be involved in all types of conceptual activity. Thus it is necessary, for example, that hunger should start with the peristaltic movements of the stomach, which are recorded in the central nervous system, leading to searching activity or decision taking as regards food. It may be that thoughts about food, as a result of conversation or the like, lead to an 'awareness' of the need existing; in other words, one may come before the other or the other before the one. Built on this basis of selective reinforcement we have decisions still to be taken as to where to eat, what to eat and precisely when to eat, as far as the hunger drive is concerned, and the same sort of decisional questions arise when any other drive is involved.


It should be said right away that we are not saying that all behaviour must necessarily be motivated, although there is a great deal to be said for this point of view. We are, in fact, saying that the bulk of behaviour involves change in the state of the organism, and this generally means a change of needs, i.e. a change of motivational state. But there is a limiting case where the motivational change may be zero, or so slight as to be utterly unimportant. What is important, however, is that the motivational system acts as a selective reinforcer and interacts with the conceptual system at all levels.

We are assuming, what is now generally regarded as well validated, that the limbic system combined with the reticular formation is a primary energiser of the conceptual system. This means that we are saying that the limbic system and the reticular formation are the source of motivational information stemming from the rest of the organism. This fits into our picture so far given, that the brain is a system designed to modify the homeostatic controls which are energised by the motivational system.

The point now is that even abstract thinking, conceptual thinking, and the like are subject to motivational changes, although somewhat more indirectly than the sort of learning activity or learned activity that is closely associated with searching for food when hungry, say.

With respect to motivation we shall now consider a theoretical term, TOTE (test-operate-test-exit). This is a concept which has been suggested to take the place of the rather simpler notion of selective reinforcement suggested by such writers as Thorndike and Hull and the various behaviourists writing on these matters over the last forty or fifty years. The idea is that the TOTE operation places emphasis on an active, purposive organism that controls the stimuli to which it is sensitive, and upon which it acts. It does bring out well the fact that the organism is no longer being thought of primarily as a sensorily controlled system, but at least partly as a motor controlled system. It is, in some respects, like the change in emphasis which occurs when passing from Hull's theory of learning to Tolman's theory of learning. The difficulty, though, is that while it is an improvement on a rather passive concept of a need being set up, and this need being reduced by suitable sensory stimuli, it is still quite inadequate to portray the whole of what is involved in the selective reinforcement operation.

It is quite true that with inputs one looks for the simple test of congruence and incongruence, and if congruence is achieved, we say that the need is satisfied (the search is over). At the same time one has to face the fact that, owing to long delays in behavioural activities - and this itself suggests, of course, the presence of a fairly complicated storage system - the search for congruence is not generally satisfied by any simple event. In fact the TOTE operation is one step towards the generalization of simple reinforcement theory. And this, itself, now has to be generalised a great deal further. The generalization we are suggesting is that when congruence occurs, which is relatively rare, a certain conclusion is drawn, and this no doubt represents a change of state of the neural network, a change that ramifies throughout the branches of the central classification system. This implies a degree of confirmation or disconfirmation in any one congruence or incongruence.


Furthermore, the 'being learned' section of our classification system (where probabilities are not equal to unity, or not greater than some value K) is modified as a matter of degree. But the vast bulk of activities performed by organisms are such that congruence does not occur; at best, partial congruence will occur, which is what is implied by such terms as degree of factual support, or degree of confirmation. This confirmation, when obtained (congruence), gives further strength to a whole set of cell assemblies arranged in hierarchical fashion. In other words the motivational system, acting as an energiser to the conceptual system, is interacting with it all the time and is never wholly satisfying (or granting congruence to) any activities of the conceptual system. This means that we regard any such simple theoretical term as TOTE as quite inadequate to describe the full range of activities of the conceptual system.
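A schematic rendering of this generalisation - with an invented congruence measure, learning rate and cut-off K - might look like this; each test shifts a graded degree of confirmation instead of simply terminating the cycle:

# A generalised test-operate-test cycle in which partial congruence updates a
# degree of confirmation; the measure, rate and cut-off are illustrative.
def congruence(expected, observed):
    """Fraction of expected features actually observed (partial congruence)."""
    if not expected:
        return 1.0
    return len(set(expected) & set(observed)) / len(set(expected))

def tote_with_degrees(expected, trials, confirmation=0.5, rate=0.3, K=0.9):
    """Each test shifts the degree of confirmation toward the observed
    congruence; 'exit' only when the accumulated confirmation exceeds K."""
    for observed in trials:
        c = congruence(expected, observed)
        confirmation += rate * (c - confirmation)   # graded, not all-or-none
        if confirmation >= K:
            break
    return confirmation

trials = [{"red"}, {"red", "round"}, {"red", "round"}, {"red", "round", "heavy"}]
print(round(tote_with_degrees({"red", "round"}, trials), 3))
# the confirmation creeps up with repeated partial and full congruence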

We shall not pursue this matter any further here, but the most important point that we should make at this stage is that there is now a great deal of evidence about the neurological equivalent of the obviously necessary operation of motivation. We are not concerned, and cannot be concerned, with the intricate details of the interaction; this will indeed take some time to unravel. It was originally intended that I would, in this paper, give neurophysiological evidence both as to the interaction of the motivational and the conceptual system and also as to the working of the conceptual system itself, but this is clearly impossible in the space at my disposal. Therefore it will be sufficient, at the moment, if I point out that the principles on which interaction between motivation and conception takes place are at least basically clear.

Neurophysiologically, it is difficult to supply evidence that conflicts with, or confirms, what has here been hypothesised. The idea is that there are visual areas (area 17), and other sensory cortical areas, that receive information from specialized receptors and partially classify it (visually, areas 18 and 19 are clearly involved). There are specific speech areas, which interact with visual and auditory stimuli and are again thought to classify input information. The temporal-parietal areas are almost certainly part of the next stage of classification of information from the separate sensory classification systems. The frontal lobes are concerned with the highest level branches, and we still, of course, have the motor areas, and a great deal more internal representation in the cerebral cortex. These nets are clearly interlaced in an extremely complex manner.

IV. THE TESTING OF THE MODEL

So far we have outlined the principles upon which we would like to construct conceptual models of the nervous system, and, in spite of a different intention, it is clearly impossible actually to construct, even using neural nets, a detailed and precise model. This we had anticipated in part, and as a result we had also considered how best to test the sort of theories, or method of theorising, that we were proposing. On the one hand, one would like to say, and it is true, that neural nets are themselves precise and can be constructed in such a way that they can be directly tested. However, in practice the difficulties are very considerable. It is difficult to actually draw out the necessary networks in detail, and it is probable that this is not the most effective way of putting the theory to the actual test. Therefore we shall claim for our neural nets, for the moment, that they are in principle an effective way of describing a particular neural theory.


This leaves us with the problem of trying to decide how we could put the theory to a more precise test. Such a test is clearly necessary, and it implies that we must somehow compare the functions of our conceptual nervous system with actual nervous systems. In fact we want a precise blueprint, which can be checked against the best available physiological evidence. For this purpose we propose that the general purpose digital computer should be used. Therefore we must construct a programme for the digital computer which is effectively equivalent to the structure of the neural net that we have implied. Here we meet a difficulty, namely that we must now prescribe in detail the very neural net that we felt was very difficult to prescribe in detail. This means that we have to search further for some sort of effective equivalent for our neural net. This, in practice, turns out to be a perfectly tractable problem. We can attack this problem in a variety of different ways, but what is suggested is that the actual anatomical organization of the neural net need not be reconstructed or simulated in the actual digital computer programme. What is vital is that the function of the neural net should be simulated in the digital computer programme. This means that we can concentrate entirely on a matrix which is concerned with inputs, outputs and storage elements and the way they can change as a function of the inputs and outputs utilised in the machine's actual history. The output, of course, is a function both of the input and the storage, and the storage is a function of the input. It would be possible to regard this purely as an input/output problem, but it is convenient in practice to utilise counters, or storage elements, and reproduce the cumulative effect of the input over some period of time, which is stored, as this is the basic function on which the output directly depends. In this way we can build up a logical equivalent of the physiology of the conceptual nervous system, as opposed to its anatomy. Indeed, should the anatomy be needed, the anatomical details, as well as the functional or physiological details of the system, could be included in the computer programme. The difficulty here is that the inclusion of the anatomy uses too much of the storage space of the computer, and reduces the kind of problems that the complex can deal with too close to the level of the trivial - that is, too close to the level where we can anticipate with pencil and paper what the machine would do, without having to bother to put it on the computer in the first place. This particular situation we need to avoid at all costs. What we are looking for is a means of investigating input/output relationships, via a particular type of organization (our simulation of the human brain), where the input and the internal organization are too large and too complex for anyone easily to see how they will interact to produce what sort of output. This, indeed, is the very essence of our whole testing procedure.
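A toy version of such a functional simulation, reduced to a handful of counters, is sketched below; the threshold, the counter scheme and the output rule are illustrative assumptions and not a description of the Bristol programmes:

# The programme keeps only counters (storage elements), accumulates the
# effect of the input history in them, and computes the output from input
# plus storage - function rather than anatomy.
class FunctionalNet:
    def __init__(self, threshold=3):
        self.counters = {}          # storage: cumulative effect of past inputs
        self.threshold = threshold

    def step(self, inputs):
        # storage is a function of the input: each input line has a counter
        for line in inputs:
            self.counters[line] = self.counters.get(line, 0) + 1
        # output is a function of the input and of the storage
        return {line for line in inputs if self.counters[line] >= self.threshold}

net = FunctionalNet()
history = [{"a"}, {"a", "b"}, {"a", "b"}, {"b"}, {"a", "b"}]
for inputs in history:
    print(inputs, "->", net.step(inputs))
# an input line only produces output once its cumulative count reaches threshold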

Still we are faced with a great deal of difficulty in actually carrying out the programming. However, we are in a position to enlist the help of automatic coding procedures, which allow us to state in relatively simple terms what internal organization our system should have; the computer will then itself transpose this relatively simple statement of organization into the appropriate computer programme. Some efforts have already been made in this direction at Bristol to encode conceptual nervous systems and test them for particular input sequences with an eye to evaluating the outputs.


Enough has been done, even at this early stage, to make us realise that such methods are quite practicable.

It is hoped that enough has been said in this article to suggest a methodological approach to neurophysiology which is relatively novel and worthwhile. The point is this. At the moment it is quite impossible to evaluate seriously models of the nervous system which are sufficiently complex and realistic. We have suggested in this paper, in very general terms, a model that depends on a 'pre-wired' system of neural nets. It is possible that this particular model is inadequate insofar as it is 'pre-wired', and quite certainly it lacks generality, since we have already suggested that the nervous system must be represented by a growth system during the process of maturation. However, viewing the adult nervous system, the 'pre-wired' form of model may well be sufficient; certainly it is sufficient to tell us a great deal more than we know already about the implications of certain neural models we construct. There is no special difficulty in extending the method of programming, used here for testing, to growth nets - nets where cells actually become connected as a result of their activity, cell A firing at the same time as cell B leading to AB having a connection which it had previously lacked. This means that the method of testing is general, although the method of neural nets, suggested as a basis for a programme, might well lack generality. The difficulty here is that whatever method we use for constructing our model, at least in a preliminary sense, prior to putting it into the form of a programme, it must have neurological verisimilitude. This requires that some sort of neural net procedure must be used, even though the actual details of the neural nets currently used may be wrong in certain particular detailed respects.
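A growth net of this kind can be caricatured in a few lines (the co-firing criterion here - a single joint firing - is an assumption made only for brevity):

# Cells that fire at the same time acquire a connection they previously lacked.
from itertools import combinations

class GrowthNet:
    def __init__(self, cells):
        self.cells = set(cells)
        self.connections = set()        # unordered pairs of connected cells

    def observe_firing(self, fired):
        """Cells firing together become connected."""
        for a, b in combinations(sorted(set(fired) & self.cells), 2):
            self.connections.add((a, b))

net = GrowthNet(["A", "B", "C", "D"])
net.observe_firing({"A", "B"})        # A and B fire together ...
net.observe_firing({"B", "C", "D"})   # ... later B, C and D fire together
print(sorted(net.connections))
# [('A', 'B'), ('B', 'C'), ('B', 'D'), ('C', 'D')]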

The idea has been that we must settle certain questions of principle, and general model-making principles, before we can make models which have any reasonable chance of success. We have already tried to decide the question of growth and ‘pre-wired’ forms of models. We can also decide to use certain principles, such as classification, and hierarchical organization, and then, at the neurological level, we can accept such principles as fractionation, convergence, and so on.

The point is that we cannot reasonably embark on large scale modelling, either in terms of building a special purpose computer, or by constructing massive programmes for computers, unless we are reasonably assured that at least the fundamental principles of construction are correct. In this paper I have tried to say something about the construction principles, and at the same time give some indication of the actual model-making and programming that is being undertaken by us at Bristol.

SUMMARY

We have defined an automaton as a finite set of elements satisfying certain specified rules, and it has already been shown that such automata can be constructed in various media to perform various intelligent tasks, tasks of the type normally associated with human beings, and especially with such human cognitive faculties as thinking.

The kind of automaton discussed in this paper is of the logical net type, which represents a sort of idealized or conceptual nervous system, and our main problem is to show the extent to which these automata can be made to fit the facts discovered by neurophysiologists and neuro-anatomists.

An outline is given of certain logical nets, and suggestions are made as to their close resemblance to different areas of the cerebral cortex. In particular the attempt was made to build on the Cell Assembly of Hebb and the Mark II Cell Assembly of Milner, and hypotheses for a conceptual nervous organization for at least a part of the brain were outlined. Furthermore, some general suggestions were made to try to bring the conceptual nervous system more into line with the existing empirical facts.

Consideration was also given to the appropriate testing of the sort of hypothesis which was suggested here. This in turn was seen to imply the possibility of using a digital computer and some suggestions were made in very general terms about its programming.

REFERENCES

BRUNER, J. S., GOODNOW, J. J., AND AUSTIN, G. A., (1956); A Study of Thinking. New York, Wiley.
CHAPMAN, B. L. M., (1959); A self-organizing classifying system. Cybernetica, 2, 152-161.
GEORGE, F. H., (1961); The Brain as a Computer. New York, Pergamon.
HAYEK, F. A., (1952); The Sensory Order. Chicago, University of Chicago Press.
HEBB, D. O., (1949); The Organization of Behaviour. New York, Wiley.
LIDDELL, E. G. T., AND PHILLIPS, C. G., (1951); Threshold for cortical representation. Brain, 73, 40.
MILNER, P. M., (1957); The cell assembly. Mark II. Psychological Review, 61, 242-252.
OSGOOD, C. E., AND HEYER, A. W., (1952); A new interpretation of signal after-effects. Psychological Review, 59, 98-118.
PASK, A. G., (1959); Physical analogues to the growth of a concept. Mechanization of Thought Processes. NPL Symposium.
PENFIELD, W., AND RASMUSSEN, T., (1950); The Cerebral Cortex of Man. London, MacMillan.
PRIBRAM, K. H., (1960); A review of theory in physiological psychology. Annual Review of Psychology, 11.
PRICE, W., (1962); Machines That Think. Privately circulated and obtainable through the University of Bristol.
SMITH, M. J. A. A., (1962); Design for a Zygote. Artorga Communication, 41.
STEWART, D. J., (1959); Automata and Behaviour. Thesis. Bristol, University of Bristol.
STEWART, D. J., (unpublished); A Model for the Brain.
UTTLEY, A. M., (1955); The conditional probability of signals in the nervous system. RRE Memorandum, No. 1109.
VON NEUMANN, J., (1953); Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components. Pasadena, California Institute of Technology Publ.


Interpretation of the Mechanism of Conditioned Reflexes Inhibiting the Activity of a Functioning Organ

ALDO NIGRO*

The interpretation given by Bykov (1958) of the mechanism of the cerebral cortex controlling functioning organs seems neither satisfactory nor a definitive one, even to the proposer himself.

The research work of Bykov's school provided many useful data about cerebral cortex control of inner organs. Referring to the experiments and considerations of different authors (Ivanova, Vvedenski) on the phenomenon of the inhibiting action of a conditioned stimulus on an active organ, Bykov says**: "The nervous impulses, acting on the active cells of the salivary glands and coming from the cortex through the efferent nerves, bring the excitation of the salivary glands to a minimum level. It may also be supposed (although in this case there are elements of arbitrariness) that impulses which may be called tonic continually reach the salivary glands through the chorda tympani, and the stomach and gall bladder through the vagus, and that these maintain the activity of the functioning organ at a certain level. The extinction of these impulses, following the appearance in the cortex of a process of inhibition, then provokes a reinforcement of the work of the glands. The reinforcement of these impulses following stimulation of the cortex provokes, on the contrary, an even more complete inhibition."

For the time being there is no intention of submitting the results of the Russian scientist to a critical revision, since this would first involve an experimental elaboration that has not yet been carried out. But it is considered useful and possible to explain the experimental results achieved by Bykov and his collaborators in a different way. It is therefore thought useful to put forward a new interpretation which, at the present time, is regarded as a theoretical one, and which rests not only on the experimental observations of Bykov's school but also on an original way of conceiving the whole mechanism of conditioned reflexes.

* Present address: Via Garibaldi, Messina (Italy). ** Quotation translated from the Italian edition (Bykov, 1958).

The action of a conditioned stimulus, causing a reaction of the organism ordinarily due to a stimulation of a different type and at another site, has had various interpretations; yet its actual mechanism has not been ascertained. It is believed that a new working hypothesis can be advanced, which bears some analogies with the cybernetic interpretation.

According to this hypothesis, a conditioned stimulation would be efficacious if it could put into motion a particular neuronic circuit in which a certain unconditioned reaction is recorded, a reaction which has already manifested itself in close connection with the conditioned stimulation (Fig. 1).

Fig. 1. Mechanism of the conditioned reflex. The conditioned reflex has a complex mechanism based on the memorizing property of the supra-axial structures. The association of a conditioned stimulus with an ordinary unconditioned reflex gives rise to a particular pattern in which the impulse from the conditioned stimulus and the unconditioned reaction are associated. Key to diagram: E, effector; R1, receptor (of unconditioned stimulus); GC, ganglion cell; SSN, spinal sensory neuron; CSN, cortical sensory neuron; CMN, cortical motor neuron; R2, receptor (of conditioned stimulus).

When a conditioned stimulus reaches the supra-axial structures, it remains active, circulating over a certain period of time in the reverberating circuits of such structures. If in the meantime the organism starts to react to the conditioned stimulus, the aforesaid conditioned impulse turns towards the active centre, and a pattern is then deposited 'in the memory' containing a representation of the ordinary neuronic circuit that is responsible for the unconditioned reaction as well as of the impulse originating from the conditioned stimulus.

The impulse originating from the stimulation of a teleceptor cannot be utilized immediately - for lack of a direct connection with an effector - and therefore it cannot give place to a unique and isolated reaction; but it remains 'in store' for a certain time. This is possible, for the impulse finds in the structures of the cerebral cortex appropriate dispositions which allow it to circulate indefinitely. This is in fact expressed by a well-known neurophysiologist, Infantellina*: "The asynchronous discharge which accompanies the positive deflection of the response of the cortical slab (to isolated electrical stimulation) is due to the activity of the short-axon neurons of the deep layers of the cerebral cortex: the activity circulates in closed functional circuits composed of the cells and of their dendrites and axons. In such a network of conducting elements the excitation can circulate a theoretically infinite number of times, provided the refractory period of the excitable elements is of shorter duration than the time needed for the excitation to traverse the whole neuronal circuit."

The utilization of the afferences on a cortical level occurs by means of the orientation caused by the most active centre, since the information received is multiple, while the possibility to react is limited (phenomenon of the common way). An example of this is found in complex epilepsy, where the initially more active centre brings all the others to a convulsive activity - and this the more easily, the less active the latter (as occurs during sleep).

In a subsequent application, when a well-defined and stable pattern has formed in the memorizing structures by means of an adequate number of repetitions, the conditioned stimulus is capable of giving a 'start' to the previous scheme, thus evoking the so-called conditioned reaction. In such a state, in fact, as soon as the conditioned stimulus presents itself, it is no longer unknown to the supra-axial structures; when compared with the 'memory' of it kept in the memory repository, it is acknowledged as the prodrome of a situation which has already occurred. The organism is then in a position to anticipate the action of the unconditioned stimulus, and to react accordingly.
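The proposed mechanism can be caricatured as a pattern store of the following kind (the number of repetitions required, and the names used, are assumptions made only for illustration):

# Repeated pairing deposits a pattern associating the conditioned stimulus
# with the recorded unconditioned reaction; the conditioned stimulus alone
# then selects and starts that pattern.
class PatternStore:
    def __init__(self, repetitions_needed=3):
        self.pairings = {}                   # CS -> number of CS/reaction pairings
        self.patterns = {}                   # CS -> stored unconditioned reaction
        self.repetitions_needed = repetitions_needed

    def pair(self, conditioned_stimulus, unconditioned_reaction):
        """A CS arrives while the unconditioned reaction is in progress."""
        n = self.pairings.get(conditioned_stimulus, 0) + 1
        self.pairings[conditioned_stimulus] = n
        if n >= self.repetitions_needed:     # pattern now well defined and stable
            self.patterns[conditioned_stimulus] = unconditioned_reaction

    def present(self, conditioned_stimulus):
        """The CS alone: recognised as the prodrome of the stored situation."""
        return self.patterns.get(conditioned_stimulus)   # None until stabilised

store = PatternStore()
print(store.present("bell"))             # None - pattern not yet formed
for _ in range(3):
    store.pair("bell", "salivation")
print(store.present("bell"))             # 'salivation' - evoked by the CS alone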

Fig. 2. Control of efferent impulse: first stage. The motor neuron discharge results in a stimulation of the Renshaw (R) cell, which serves to depress activity in the neuron itself.

The inhibition of efficiency in the conditioned stimulus, due either to a new outer stimulus or to an inner change of the organism, brings no contradiction in our explanation, for it is obvious that any new situation - since it has not yet settled in the 'memory' - cannot evoke the already existing pattern.

* Quotation translated from Infantellina (1957).


Even when dealing with the inhibiting action of a conditioned reflex on an active organ, we think it possible to apply such an explanation.

Fig. 3. Control of efferent impulse: second stage. As soon as the effector (E) starts functioning it produces an afferent discharge which becomes more intense through the stronger action of this effector and reaches the motor neuron with depressive consequences.

While it receives an indirect confirmation of its actual possibility just because of such a peculiar case, which is hardly clear at first, it is at the same time capable of supplying some useful explanatory elements with reference to a mechanism that has remained rather obscure.

Under such conditions the association between a conditioned stimulus and the pattern formed takes place regularly. Its peculiarity resides in the pre-formed pattern, which becomes a pattern during the action and which, therefore, lacks initial exteroceptive information.

Fig. 4. Control of efferent impulse: third stage. As soon as the supra-axial centre starts functioning it involves the reticular substance (Ret.) which inhibits the action of this supra-axial centre through a complex mechanism. (Abbreviations as for the preceding figures.)

The absence of such information, whereas proprioceptive information is present and tends to oppose the prosecution of the action (see Figs. 2, 3 and 4), may explain the finding that the functioning organ stops its own activity under the effect of the conditioned stimulus (Fig. 5).

Afferences are indispensable for stimulating the organism to activity: this has been amply demonstrated. The lack of such afferences corresponds to the opening of a circuit, and this means the abolition of the function of the circuit.


Yet it must be observed that afferences are integrated at different levels: when no action is taking place the afference does not stop at an axial level, but reaches even supra-axial levels. Thus the organism is informed of the presence of that element, and it can assess its importance and regulate its reaction accordingly.

Fig. 5. Mechanism of the conditioned reflex inhibiting the activity of the functioning organ. As soon as the stimulation of the ordinary receptor (R1) succeeds in producing the reaction of the effector (E), it no longer creates any informational impulse for the cortical centres, which are under the action of the proprioceptive impulses originating from the active state of the effector. Under such conditions the stimulation of the new receptor (R2) associates with a pattern lacking any exteroceptive information. Key to diagram: SSN, spinal sensory neuron; SMN, spinal motor neuron; CSN, cortical sensory neuron; CMN, cortical motor neuron; PSN, proprioceptive sensory neuron.

The afferences that arrive at a supra-axial level, besides serving the specific function of any afference, also serve to exert a general aspecific influence which maintains the 'tone' of the organism. Under ordinary conditions, in fact, axial structures do not react to any stimulation when they are not simultaneously facilitated through centrifugal impulses coming from supra-axial structures. Once a reaction has been evoked by such centrifugal impulses, specific afferences do not reach the cortex any more, because they are diverted towards the axial motor neurons, where they find a reduced resistance to their passage: thus, during the action, afferent specific information does not arrive at the cortex; therefore, it is completely absent in the pattern that can be constituted in supra-axial structures by association with the conditioned stimulus.

Our interpretation is based on analogies with mechanisms that are used for the control of active organs, and also with those guiding the performance of certain pre-arranged processes introduced into a machine that has the task of effecting a particular operation.

Computers or electronic brains are today capable of effecting many operations copying human behaviour. Some of these operations, certainly the less complex ones, are effected by means of data recorded on a magnetic tape and introduced into the machine, which supply the mechanism with any information needed for a certain piece of work.


We suppose that a similar mechanism produces the conditioned reflex.

The organism already knows the operational scheme, for the latter is deposited in the memorizing repository; the initial conditioned stimulus is able to select the corresponding scheme and puts it into operation, similarly to what happens in a juke-box.

SUMMARY

An explanatory scheme of cerebral cortex control of a functioning inner organ is proposed, based on data supplied by Bykov's school and on a theoretical interpretation of the mechanism of conditioned reflexes, whose action is held to be possible inasmuch as they can evoke a pattern that is already well defined in the cerebral structures.

It is held that conditioned reflexes can inhibit the activity of a functioning organ because in the pattern, evoked under such conditions, there is a complete lack of unconditioned afferent impulses.

REFERENCES

BYKOV, I., (1958); La Corteccia Cerebrale e gli Organi Interni. Torino, Feltrinelli.
INFANTELLINA, F., (1957); I fenomeni elettrici della corteccia cerebrale isolata. Relazione al IX Congresso Nazionale Società Italiana Fisiologia. Firenze, Editrice Valsalva.


Information Processes of the Brain

A. V. NAPALKOV

Moscow State University, Moscow (U.S.S.R.)

The development of cybernetics opens up new ways for the study of the mechanisms which govern brain activity.

One of the most important trends of research in this sphere is the study of information processes. With the help of experimentation it has proved possible to reveal systems of rules which underlie the processing of information in the brain (algorithms) and which constitute the basis of complex forms of nervous activity.

Analysis of the working of a model created on the basis of these algorithms shows that the description of the experimentally revealed algorithms is complete and precise; this clears the way for further investigations of living organisms (Ashby, 1958; Feigenbaum, 1961; Gelernter, 1960; Newell, 1958). This new method gives a scientific analysis of a number of complex forms of brain activity by revealing the system of simple rules which determines the processing of information (for example, the ability to solve problems). The establishment of a system of algorithms allows us to make an approach to the solution of another important question, the structure of the nervous network of the brain. Attempts to analyse the complex forms of mental activity (such as thinking, solving problems, pattern recognition, learning, etc.) in a direct way, on the basis of the theory of nervous networks, often prove to be rather difficult. To make this task solvable, it is necessary first to carry out an analysis of the complex forms of brain activity on the level of information processing by revealing the system of algorithms, and then to elaborate certain hypotheses of the structure of the nervous network which underlies the algorithms. In the latter case it proves possible to elaborate fruitful hypotheses, each of which must subsequently be verified experimentally. Where there is a choice of several hypotheses, a definite form of experimentation can be suggested which would corroborate only one of these hypotheses. Thus it is possible to make use of the method of scientific investigation for elaborating and verifying various hypotheses: consequently, a complex application of the method of algorithmization and of the theory of the nervous network makes possible a thorough analysis of the mechanisms that control the functioning of the brain and helps to fill the still existing gap between psychological and neurophysiological investigations.

We believe that new research methods, engendered by the development of cybernetics, lead to a radical change in the methods which have so far been used in the study of the brain. They enable the researcher to pass from a mere accumulation of facts to purposeful experimental investigations based on an elaborate scientific theory.

In the course of our research carried out at Moscow University we tried to use the above-mentioned methods of investigation, which combine the discovery of algorithms by means of experimentation with the creation of electronic models, to analyse the mechanisms of the activity of the brain. An attempt was also made to disclose the nature of the factors which are responsible for the development of the neurogenic stage of hypertonic disease.

A system of methods aimed at creating models of the external environment was worked out in the course of experimentation. The models included systems of cause-effect relationships between the signals and the actions of the organism. This is illustrated in Fig. 1. The letters A1, A2, ... An designate various signals, while the letters b1, b2, ... bn designate various actions of the organism. The letter A denotes the solution of the task given to the subject, or the obtainment of a reinforcer, such as food, by the experimental animal.

Fig. 1. Scheme of experimentation systems. For explanation see text.

This scheme means that if in the presence of signal A6 the subject or the animal performed movement b3, the experimenter had to switch on signal A8. If on the other hand movement b3 was performed in the absence of signal A6, then signal A8 was not switched on. In this way the experimenter could, with the help of his actions, reproduce definite features of the external environment. This method, applied in the course of experimentation, allowed us to take strict account of all the signals, and complexes of signals, which were coming from the outside, as well as of all the motor reactions and of the structure of the information stored in the memory. We were able not only to investigate the reflex responses of the organism, but also to analyse the processes going on in the external environment. It proved possible to establish precisely which changes in the surroundings and which new signals are called forth by certain movements of the organism; this was of great importance for analysing the significance of the principles of brain activity revealed in the course of experimentation.
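The cause-effect rule of this scheme can be restated, purely as an illustrative sketch, in a few lines of code. The class name ModelEnvironment and the dictionary representation of the rules are our own assumptions; only the example rule that movement b3 in the presence of A6 switches on A8 is taken from the text.

```python
# Minimal sketch (not the authors' implementation) of the experimenter's rule in Fig. 1:
# each entry maps (signal currently present, movement performed) -> signal switched on.

class ModelEnvironment:
    def __init__(self, rules):
        # rules: dict mapping (signal, movement) pairs to the signal to be switched on
        self.rules = rules

    def react(self, present_signal, movement):
        """Return the new signal called forth by this movement, or None."""
        return self.rules.get((present_signal, movement))

env = ModelEnvironment({("A6", "b3"): "A8"})
print(env.react("A6", "b3"))   # -> 'A8'  (signal A6 present, movement b3 performed)
print(env.react(None, "b3"))   # -> None  (movement b3 without A6: A8 is not switched on)
```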


Thus, the aforesaid methods provide new possibilities for revealing the systems of rules to which the information processing of the brain is subordinated. We could also give a full analysis of the entire cycle of information processing taking place both in the organism and in the external environment, which is important for a scientific analysis of the algorithms of brain activity.

Of great significance in the study of the problem of self-organizing control systems is the idea of Wiener (1958) that the complex forms of brain activity are based on a hierarchical system of working programmes. Each of the programmes of a higher level forms, corrects and regulates the programme of a lower level. In our opinion, a scientific analysis of many complex forms of brain activity (which was inconceivable when the old methods were applied) becomes possible through revealing the hierarchy of algorithms at different levels.
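The hierarchical relation just described can be pictured, purely for illustration, as follows. The function names first_level and second_level, and the dictionary used as a table of conditioned reflexes, are our own assumptions and are not part of Wiener's or the author's formulation.

```python
# Illustrative sketch only: a hierarchy in which a higher-level programme forms and
# corrects the programme one level below it.

def first_level(signal, reflexes):
    """A first-level programme: a table of conditioned reflexes, signal -> movement."""
    return reflexes.get(signal)

def second_level(reflexes, signal, movement, reinforced):
    """A second-level ('learning') programme that forms and corrects the first-level table."""
    if reinforced:
        reflexes[signal] = movement          # fix the new conditioned reflex
    else:
        reflexes.pop(signal, None)           # let an unreinforced link drop out
    return reflexes

reflexes = {}
reflexes = second_level(reflexes, "A8", "b10", reinforced=True)
print(first_level("A8", reflexes))           # -> 'b10'
```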

Our research in this direction has shown that already at the stage of studying the programmes of the first level a highly complex picture is revealed, a picture which is not reduced to a mere sum of chains of reflex reactions.

For example, our investigations disclose the existence of special control mechanisms which switch on and switch off different sections of the already elaborated and stabilized programmes of the first level (systems of conditioned reflexes); the working algorithms of these control systems were subjected to a thorough study. It proved that separate sections of such already well-stabilized systems do not always come into effect. There exists another special system of rules which determines the active or inactive state of definite sections of the already elaborated working programme.

The study of the algorithms of the second, higher level (algorithms of self-organization) reveals a still more complex picture.

The so-called principle of reinforcement is of great importance in psychology and physiology. This principle is closely bound up with the motivation of behaviour and the problem of the genesis of subpurposes. It ensures the due appraisal and selection of trustworthy and useful information. It was established that this principle manifests itself in a considerably more complex form in a study of the algorithms of the second level.

The elaboration of each new conditioned reflex included in the programme necessitated repeated reinforcements. The experiments disclosed the existence of reinforcing stimuli of various types which play a different role in the formation of new working programmes. Repeated reinforcements, by means of various types of such reinforcing signals, proved to be indispensable to the formation of a new working programme.

The experiments also revealed the existence of a whole system (structure) of reinforcing stimuli which ensures a full selection of useful and trustworthy external information. This structure is formed in the course of life of every man and animal, in indissoluble connection with the formation of new first-level programmes of the brain activity.

The significance of each reinforcing signal was determined by its place in the already formed system of conditioned reflexes. In the course of formation of a new programme of brain activity separate signals included in this programme began to play a definite role in the further selection of information, as well as in the further formation of new sections of the system of conditioned reflexes. Several types of such signals were disclosed. Reinforcing signals of the first type, for example, played an intermediate role; they reinforced the new elements of the programme (the new conditioned reflexes) only when the entire system was reinforced by, say, food (unconditioned reinforcement). Reinforcing signals of the second type ensured the formation of autonomous chains of conditioned reflexes. They did not require any repeated reinforcement. On the whole, highly complex and perfect mechanisms ensuring the work of the brain as a self-organizing system are formed. These mechanisms often escape the attention of researchers - physiologists and psychologists - since they manifest themselves only when the integral system including the whole organism-in-the-external-environment is considered.

We disclosed the systems of rules (algorithms) determining which signal, and under what conditions, may become a reinforcing stimulus of one type or another. For example, we found that if one of the components of the complex stimulus of a conditioned reflex is switched on, or if a conditioned inhibitory signal elaborated in a chain of reflexes is switched off, this may become a reinforcing signal of the second type. The switching-on of a stimulus which earlier repeatedly preceded the switching-on of the conditioned inhibitory signal may also turn into such a reinforcing signal. In general, a highly complex system of algorithms was disclosed, a system which underlies the formation of the above-described complex of reinforcing stimuli.
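The three conditions just listed can be summarized, again only as a sketch on our own assumptions, in a simple predicate. The sets complex_stimulus, inhibitory_signals and precursors_of_inhibition are hypothetical bookkeeping structures, not terms used by the author.

```python
# Schematic restatement (not the authors' implementation) of the conditions under which
# an event may act as a reinforcing signal of the second type.

def is_second_type_reinforcer(event, complex_stimulus, inhibitory_signals, precursors_of_inhibition):
    """event is a pair (signal, 'on' | 'off'); the three sets are hypothetical bookkeeping."""
    signal, change = event
    if change == "on" and signal in complex_stimulus:
        return True          # a component of the complex conditioned stimulus is switched on
    if change == "off" and signal in inhibitory_signals:
        return True          # a conditioned inhibitory signal is switched off
    if change == "on" and signal in precursors_of_inhibition:
        return True          # a stimulus that repeatedly preceded the inhibitory signal
    return False

print(is_second_type_reinforcer(("A3", "on"), {"A3", "A6"}, {"A9"}, {"A2"}))   # True
print(is_second_type_reinforcer(("A9", "off"), {"A3", "A6"}, {"A9"}, {"A2"}))  # True
```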

A method of simulation was next applied to verify and analyse the disclosed algorithms (Braines et al., 1959c). Investigation of the work of an electronic model corroborated some of our suppositions. However, it was also shown that even this complex system of rules cannot ensure the functioning of a self-organizing system in more or less complicated conditions of the external environment, and that the brains of man or animals undoubtedly make use of more complex and perfect algorithms. The defect of the above-described algorithms was that the rate of formation of new working programmes depended on the accidental emergence or disappearance, in the environment, of signals which are indispensable to the elaboration of such programmes. Meanwhile, the brain possesses algorithms which ensure the formation of motor reactions purposefully modifying the external environment by causing the required stimuli to appear or disappear. Special experiments were carried out in order to investigate this phenomenon, and these experiments revealed still more complex systems of algorithms. They showed that man and animals possess several systems of algorithms which are utilized in their highly complex interaction. Each of these systems has its individual advantages, and its utilization proves to be expedient in well-defined conditions. At the same time there exists an algorithm of a higher, third order which, at definite moments, switches on and switches off the 'algorithms of learning' of the second order. Being unable to describe here the entire complex system of algorithms, we shall cite only one example.

Here is the algorithm revealed in the course of experimentation. The following chain of reflexes was elaborated: A4-b8-A1-b6-A3-b5-A8-b10-reinforcement. The conditioned reflex A8-b10-reinforcement was elicited. (1) The subject performed some accidental movements (b1, b2, ... bn) and different complexes of movements (b2, b1, b3, b4, b1; b15, b1, b5, etc.). (2) If these movements did not lead to any changes in the external environment, if no new signals appeared (A1, A2, ... An), the subject did not repeat the movements already tested. Some time later the motor activity disappeared altogether. (3) But if the subject's movements led to the appearance of a certain indifferent signal (A1, A2, ... An) which was absolutely new (i.e. not connected with the reinforcement), the movements that had previously disappeared (b1, b2, b3, ... bn) recurred. Besides, the subject memorized those movements which had led to the appearance of the new signal, as well as the signal itself. (4) Thus, a chain of reflexes was formed, on the basis of novelty, and this continued until the chain led to the appearance either of reinforcement or of one of the conditioned stimuli which entered the previously elaborated alimentary system of reflexes (for example, A8). Then the entire system, elaborated on the basis of reinforcement by novelty, became stabilized at once.
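Steps (1)-(4) can be rendered, purely as an illustrative sketch and not as the author's model, in the following toy form; the dictionary environment, the function build_chain and the trial limit are our own assumptions.

```python
import random

# Toy rendering of the novelty-based chain formation described above: random movements
# are tried; a movement is memorized only if it produces a new signal, and the chain
# stabilizes once it reaches a signal already belonging to a reinforced system (e.g. A8).

def build_chain(world, movements, known_reinforced_signals, trials=1000):
    chain, seen_signals, signal = [], set(), None
    for _ in range(trials):
        movement = random.choice(movements)              # (1) accidental movements
        new_signal = world.get((signal, movement))
        if new_signal is None or new_signal in seen_signals:
            continue                                     # (2) no novelty: movement not repeated
        chain.append((movement, new_signal))             # (3) memorize movement and new signal
        seen_signals.add(new_signal)
        signal = new_signal
        if new_signal in known_reinforced_signals:
            return chain                                 # (4) chain stabilized via the old system
    return chain

world = {(None, "b5"): "A3", ("A3", "b6"): "A1", ("A1", "b10"): "A8"}
print(build_chain(world, ["b5", "b6", "b10"], {"A8"}))
```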

It was, however, established that many superfluous, unnecessary links were included in the chain of reflexes. An algorithm which eliminated this defect was found. It is based on the following principle. Separate links, one after another, began to 'fall out' of the chain of reflexes. If the chain of reflexes was reinforced, these links never reappeared, but if no reinforcement was used, they returned. Thus, there was observed, as it were, a process of selection of those elements in the system which strictly corresponded to the conditions of the external environment.
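The selection principle just described may be sketched as follows, on our own assumptions; the predicate leads_to_reinforcement and the example set of 'needed' links are hypothetical.

```python
# Compact sketch of the pruning principle: each link is dropped in turn; if the
# shortened chain still ends in reinforcement, the link is discarded for good,
# otherwise it is restored.

def prune_chain(chain, leads_to_reinforcement):
    """chain: list of links; leads_to_reinforcement: predicate on a candidate chain."""
    pruned = list(chain)
    for link in list(chain):
        trial = [l for l in pruned if l != link]         # let one link 'fall out'
        if leads_to_reinforcement(trial):
            pruned = trial                               # reinforced without it: it never reappears
    return pruned

# Hypothetical example: only the links ending in A3 and A8 are actually needed.
needed = {("b5", "A3"), ("b10", "A8")}
chain = [("b5", "A3"), ("b7", "A2"), ("b10", "A8")]
print(prune_chain(chain, lambda c: needed <= set(c)))    # -> [('b5', 'A3'), ('b10', 'A8')]
```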

This study of the algorithms of the brain activity made possible an attempt to approach the analysis of the structure of the nervous network, which lies at the base of these complex forms of the brain activity (Ashby, 1958; Braines and Napalkov, 1959a,b; Andrew, 1959; Braines et al., 1959).

In our opinion, the aforesaid researches show that the application of the method of algorithmization may lead to a marked progress in the study of the brain. Although a number of important principles of information processing were revealed already in the past, the absence of a proper general conception and of a method allowing us to appraise the degree of completeness of the scientific analysis (the method of simulation) retarded the development of science. The method of algorithmization makes possible a transition from the accidental way of disclosing separate principles (the principle of 'trials' and 'errors', the principle of 'efferent generalization', etc.) to the ascertainment of the entire system of algorithms underlying one complex form of the brain activity or another; it also allows us to verify with precision the degree of completeness and trustworthiness of the scientific analysis. Owing to this, the significance and place of this kind of investigations in the general system of research into the mechanisms of brain activity become quite clear. Besides, conditions have been created for a transition from investigating separate interrelations between the nervous centres to studying the structure of the nervous network as a whole and to elaborating a harmonious theory.

Owing to the application of the method of simulation, all the groundless conceptions, which in the past greatly hindered the development of science, must undoubtedly collapse; according to Holst, these conceptions led, in the final analysis, to a disbelief in any theoretical research in the sphere of the brain activity and to worshipping 'pure facts'.

The above-described trend in research is directly connected with the practical solution of a number of urgent medical problems. Of decisive significance in this respect is the functioning of the self-organizing control systems. When the work of these systems becomes deranged, new programmes of work are formed which cause and actively maintain the pathological deviation in the control systems. It is the active character of these mechanisms that accounts for the difficulty of treatment. The problem of radical treatment of the afore-mentioned diseases will, apparently, be successfully solved only when we find adequate means of influencing the functioning of the control systems itself. This problem assumes prime importance. But until recently the study of the problem has been confined predominantly to ascertaining the influence which is exerted by the stimulation of various nerves and nervous centres as well as by the administration of various pharmacological agents on the activity of individual organs, and to proving the possibility of elaborating conditioned reflexes connected with the functioning of the internal organs. The control factor was often regarded as a mere sum of unconditioned and conditioned reflexes.

In connection with the development of cybernetics it becomes quite clear that such approaches to the study of control processes in living organisms are insufficient (Ashby, 1958; Kilburn et al., 1959; Gelernter, 1960). In particular, the above-described investigations show that the work of the control systems in living organisms is of a highly complex character and that it cannot be reduced to a mere sum of reflexes. In order to solve the problem of the control mechanisms governing the activity of the internal organs, it is likewise necessary to study the complex system of algorithms of different levels and to analyse the intricate structures of the nervous network. A complex analysis of the information processing which covers both the organism and the external environment is also often required. The application of the above-mentioned methods of investigation may prove useful in this respect.

We have attempted to solve this problem. We proceeded from the theory of Lang and Miasnikov (1951) (Braines and Napalkov, 1959a,b; Braines et al., 1959c) concerning the leading role of psychical traumatization in the development of the hypertensive disease. According to these scientists, a rise in the level of blood pressure evoked by the emotion of fear or rage is an adaptive phenomenon. But as a result of repeated and protracted psychical traumatization, this protective mechanism becomes the fundamental cause of hypertension. We made an attempt to analyse this problem by reproducing the hypertensive state experimentally in dogs. Our first experiments directly reproduced the theory of Lang and Miasnikov. We induced protracted and repeated traumatization by means of creating conditions which could provoke the emotions of fear and rage in the experimental dogs. For this purpose we repeatedly stimulated the animals by an electrical current, suddenly pulled the straps in which they were fastened to the ceiling of the experimental chamber, used flashes of light, fired toy-pistols, etc. As a result of the application of each of these 'extraordinary' stimuli, the level of blood pressure in the dogs increased by 30-50 mm. Fifteen minutes later the blood pressure dropped again to its normal level. With the repeated application of these stimuli the resulting increase in the blood pressure was becoming less and less pronounced. After 20-30 of such applications no rise in blood pressure was observed. Thus, the hypothesis, in its initial simple form, was not corroborated. The innate regulating mechanisms proved to be firm, and could not become the fundamental cause of development of a pathological state in the organism.

We were able to reproduce a biological model of pathological development only when we provoked a derangement in the third-level algorithms of the self-organizing systems. In this case we could observe the formation of new regulating mechanisms which caused a stable rise in blood pressure.

The derangement of the algorithms was obtained in the following way. After a single application of one of the afore-mentioned ‘extraordinary’ stimuli (for example, stimulation by electric current) we fully excluded it from further experimentation and instead used exclusively the conditioned stimuli which previously had been connected with this ‘extraordinary’ signal (for example, a metronome, the flash of a white electric bulb, etc.). The conditioned stimuli were applied repeatedly, at intervals of 3-5 min, without any reinforcement. The first application of a conditioned stimulus resulted in a 30-40 mm rise in blood pressure in all five dogs. After that the intensity of the reaction to the application of the stimulus continued to grow. The sixth and seventh applications brought about a 50-60 mm increase in the maximal blood pressure. This process was going on during the subsequent days as well, and soon the blood pressure rose to 190-230 mm. But even this was not the limit; the hypertensive state of the dogs showed a further increase. For example, during the period of 10 days the blood pressure in the dog ‘Dshan’, which previously maintained itself on the stable level of 130/90 mm Hg, rose under these conditions to 250/160 mm. (The numerator denotes the level of maximal blood pressure and the denominator the level of minimal blood pressure.) In the case of dog ‘Belka’ (Table 1) the following changes in blood pressure occurred.

TABLE I

BLOOD PRESSURE IN THE DOG 'BELKA'

Date      Blood pressure in mm Hg
27/V      160/95
28/V      180/120
29/V      180/120
31/V      190/125
2/VI      200/132
3/VI      220/140
15/VI     220/140
15/VII    220/140

The hypertensive state persisted in the dogs for many months (the dogs were observed for a period of 1 year and 4 months). The blood pressure did not fall even after the dogs had been excluded from the experiments for a period of 5 months (during this period they were kept in the kennel).

An analysis of the factors which caused the development of this pathological state showed that it was due to the formation of new pathological programmes of regulation of the level of blood pressure. These control programmes proved to include a considerable number of external signals which provoke a protracted and stable rise of the blood pressure. Many of these signals were present in the surrounding medium for a long time, and this resulted in the stability of the pathological level.

Formation of the new control programmes was caused by a derangement in the algorithms of the self-organizing control systems. Under usual conditions the regulation of the level of blood pressure is based on a programme which includes both interoceptive and exteroceptive signals (systems of conditioned reflexes). These programmes are formed on the basis of algorithms of the second level; under usual conditions the latter cause the formation of such new programmes of control which are useful to the organism. These algorithms ensure the selection of trustworthy and necessary information. The work of this system rests upon the previously described principles, such as the principle of repeated combination of two signals as the basis of reinforcement, the principle of repeated reinforcement, etc. There exist algorithms of the third level which modify the algorithms of the second level. For example, in the case of a danger threatening the life of the animal, the algorithms change their activity and ensure an extremely rapid formation of such regulating mechanisms which aim at preventing the destruction of the organism.

The programmes of the third level which determine this change in the work of the algorithms of the second level play an adaptive role, but in the case of their protracted deviation, they may serve as a basis for the formation of new pathological control mechanisms.

This was precisely what was observed in our experiments. The usual second-level algorithms of brain activity proved to be deranged in the experimental animals. New pathological systems of conditioned reflexes connected with the rise in blood pressure formed at a very rapid rate. The new conditioned reflexes were evoked after a single combination of a new signal with a conditioned stimulus, and the resulting effect, namely the rise in blood pressure, was not extinguished even after 900 applications of this combination without reinforcement. The system of double reinforcements, as well as other algorithms (Ashby, 1958; Braines and Napalkov, 1959a,b; Braines et al., 1959c, 1960) which ensure the formation of behaviour, were deranged. This resulted in the elicitation of conditioned reflexes of the second and higher orders and in the formation of new working programmes (for example, control programmes) without any reinforcement by an unconditioned stimulus. These programmes were not extinguished in the absence of reinforcement.

The above-described changes in the algorithms of brain activity led to the formation of a rapidly growing inextinguishable system of conditioned reflexes which called forth a stable rise in blood pressure. An ever-increasing number of signals from the environment proved to be included in this system, and this contributed to the further development of the pathological state. If a single signal of this pathological system was switched on once or twice under conditions of a new chamber which was unfamiliar to the experimental dog, subsequently the dog's blood pressure in this chamber showed a stable level of 220/140 mm Hg.


Thus, new programmes of regulation of the blood pressure were formed, and its abnormally high level was firmly maintained. The complex of the pathological signals was steadily growing, as a result of which the high level of blood pressure could be maintained during the greater part of the day.

We investigated various means of treating this state experimentally. Only those means proved to be effective which led to a modification of the algorithms of the brain. Otherwise, for example, after the administration of medicinal substances, we obtained only a temporary effect. We established the difference between the effect produced by medicinal substances when they were applied against the background of the action of pathological conditioned stimuli and their effect in the absence of such stimuli. In the first case the therapeutic effect proved to be better.

It must be emphasized that the mechanisms of the above-described pathological state can be ascertained only when the entire system, including both the organism and the external environment, is considered. An isolated study of the organism may not disclose the causes of the disease and will not help to find effective means of treatment.

Thus, the method of algorithmization is of great importance for studying the causes of some diseases. It proves necessary to consider the whole complex picture of the control system and to analyse the changes which take place in the algorithms of different levels. At the same time it is important to take account of the processes going on not only in the organism, but also in the external environment.

Treatment under clinical conditions, where external signals are absent and, consequently, no pathological forms of control can manifest themselves, leads to temporarily favourable results. However, when the patient returns to his habitual conditions of life, under which the previous closed pathological system of circulation of information becomes restored and the working of the pathological systems of control again fully manifests itself, the disease (for example, hypertension) recurs. In the course of treatment it is, apparently, more important to influence the new pathological control system itself than to obtain a temporary decrease in blood pressure with the help of medicamental remedies.

These facts reflect only one particular case, illustrating certain general principles. An essential role in the development of some diseases (such as angina pectoris, bronchial asthma and others) is played by the formation of such pathological control systems which include stimulations of the interoceptors as pathological signals, for example, formation of lactic acid or changes of blood sugar concentration. The principles underlying the formation and functioning of such systems are identical with the previously described pathological mechanisms. These pathological forms of control result also from disturbances in the working of the self-organizing systems, which lead to the formation of new pathological programmes for the operation of the control mechanisms.

We studied the formation of some of these programmes for the regulation of the blood sugar level. It was found that under the action of one signal during several hours a complex programme of changes in the blood sugar level was formed. We investigated the laws governing the formation of such complex systems of reflex reactions.


Self-organizing control systems play a particular role in the development of diseases. Whereas a disturbance in the operation of inborn control mechanisms is usually of a limited character and is rapidly eliminated with a change in the unfavourable conditions, a derangement in the working of self-organizing mechanisms leads to the development of new pathological forms of control; these new forms may function in the organism persistently and protractedly even after the disappearance of the unfavourable external influences.

The study of self-organizing control systems assumes great importance for medical science. When treating diseases, it is necessary to take into account the existence of pathological control systems and to be able actively to influence these systems instead of confining the therapeutic treatment to separate secondary factors. When analysing the causes of diseases, it is important to know the algorithms of the self-organizing systems, as well as the character of the derangements of these algorithms which lead to the development of pathological forms of control; this will allow us to find effective therapeutic means.

The achievements of modern cybernetics may be of great advantage to pathophysiology and medicine.

SUMMARY

Information processes in the study of the brain, especially in relation to the problem of hypertension, have been discussed. An account has been given in which an attempt has been made to analyse some complex forms of brain activity on the basis of a study of the information processes.

The communication is mainly concerned with the presentation of experimental data which are connected with the elaboration of new methods of treatment. These methods, based on the study of information processes, were tested on dogs in which hypertension had been induced experimentally.

REFERENCES

ANDREW, A. M., (1959); Learning Machines. Mechanisation of Thought Processes. London. Stationery Office (p. 475).

ASHBY, W. ROSS, (1956); Design for an intelligence-amplifier. Collection Automata. Moscow. Publishing House of Foreign Literature.

ASHBY, W. ROSS, (1958); The application of cybernetics to biology and sociology. Voprosy Filosofii, N12, 110-117.

BRAINES, S. N., AND NAPALKOV, A. V., (1959a); Cybernetics and some problems of psychiatry. Collection Problems of Experimental Pathology. Moscow. U.S.S.R. Academy of Medical Sciences.

BRAINES, S. N., AND NAPALKOV, A. V., (1959b); Some questions on the theory of self-organizing systems. Voprosy Filosofii, 6, 148-154.

BRAINES, S. N., NAPALKOV, A. V., AND SCHREIDER, I. A., (1960); Analysis of the working principles of some self-adjusting systems in engineering and biology. Information Processing. Proceedings International Conference on Information Processing UNESCO (ICIP), Paris, June 1959 (p. 362-371).

BRAINES, S. N., NAPALKOV, A. V., AND SVECHINSKY, V. B., (1959c); Problems of Neurocybernetics. Moscow. U.S.S.R. Academy of Medical Sciences.

FEIGENBAUM, E. A., (1961); The Simulation of Verbal Learning Behavior. Proceedings Western Joint Computer Conference, 19, 121-132.

GELERNTER, H., (1960); Realization of a geometry theorem proving machine. Information Processing. Proceedings International Conference on Information Processing UNESCO, Paris (p. 273-282).

KILBURN, T., GRIMSDALE, R. L., AND SUMMER, F. H., (1959); Experiments in machine learning and thinking. Information Processing. London. Butterworth (p. 303-309).

MIASNIKOV, A. L., (1951); Role of disturbances of the higher nervous activity in the development of hypertonia. Zhurnal Vysshei Nervnoi Deyatelnosti, 1, 99-104.

NAPALKOV, A. V., (1958); A study of the laws for elaborating complex systems of conditioned reflexes. Vestnik Moskovskovo Universiteta, 2, 75-79.

NAPALKOV, A. V., AND SHTILMAN, E. V., (1959); A study of complex forms of the analytico-synthetical cerebral activity. Vestnik Moskovskovo Universiteta, N39, 57-83.

NEWELL, A., AND SHAW, J. C., (1958); Elements of a theory of human problem solving. Psychological Review, 65, N3.

PASK, G., (1961); The cybernetics of evolutionary processes and self-organising systems. Proceedings 3rd Congress of the International Association of Cybernetics, Namur, 1961.

PAVLOV, I. P., (1956); Complete collected works. Moscow. State Medical Publishing House.

REITMAN, R. W., (1961); Programming intelligent problem solvers. IRE Transactions of the Professional Group on Human Factors in Electronics, 2, N1.

WIENER, N., (1958); Cybernetics. Moscow. Soviet Radio Publishing House.


Information and the Brain

JIŘÍ ZEMAN

Philosophical Institute, Czechoslovak Academy of Sciences, Prague (Czechoslovakia)

BRAIN WRITING

A unifying process in all fields of human activity including communication by speech and information processing is of enormous importance to the development of society. Apart from well-known attempts at creating artificial international languages, such as Esperanto, there are endeavours, no less significant for modern society, at creating a universal scientific language which would permit closer connection between and unification of different branches of science, uniformity of expression, better information processing and centralized research. This endeavour has now a promising future thanks to cybernetics, mathematics and the theory of information which, in co-operation with physiology, linguistics and psychology, could produce fruitful work.

Hitherto existing methods were and still are concerned mainly with the formal aspect of speech. Mathematical logic and logical language analysis (semiotics) analyse the logical structure of thinking and language, and create general systems of symbols to express that logical structure and logical operations; however, they disregard the qualitative aspect, they do not consider the contents, nor do they encompass all the wealth of reality, although semantics also deal with the problem of the relation between symbol and meaning. The aspect of content of speech is dealt with neither by the theory of informational languages and code systems, the theory of machine translations, nor by the theory of neural networks which makes a significant attempt at combining the methods of mathematical logic with neurophysiology. All that is involved here are formal models of logical operations, logical structures and systems of symbols.

The attempt to grasp the aspect of content of speech and its exact expression could in a way proceed from the history of experiments with conceptual writing or ideography. Experiments carried out so far could not yet proceed from the knowledge of brain physiology, but it would be worth while to study them more in detail in order to find out in what way they could still give a stimulus to research today and where they were wrong. It will be recalled that it was Leibniz in particular who, in connection with endeavours for a universal characteristic - a theory of symbols, general combinational analysis - logical syntax and general encyclopaedia - unified science - made an effort to create a universal language and a universal ideography. There were also Dalgarno, Wilkins, Komensky and others. It is worth while, too, to contemplate in this connection the old systems of writing which existed prior to the phonetic system, e.g. the original old Chinese writing, Pa-kwa, which derived from the knot writing and preceded the pictorial Ku-wen writing from which the present Chinese writing derives. The symbols of the Pa-kwa writing represented basic concepts; they were based on mythology and simultaneously were expressions of the practical needs of navigation. Natural phenomena - wind, fire, thunder, etc. - were to primitive man allegories of mental processes, the mental picture was a projection of actual reality, the verbal expression was directly connected with its content. The eight basic Pa-kwa symbols were arranged in a wind-rose and represented pairs of opposites such as light-heavy, cold-warm, etc. These concepts were composed of the creative principle yang and the destructive principle yin, which were also expressed in their graphic representation composed of broken and unbroken lines; this mode of expression is in fact executed in the binary system. Further ideograms were made up by combinations of the symbols.

Consciousness, thought and speech reflect objective reality. So far, we know very little about their code, but that is just what should excite our inquisitiveness. The informational content of the reflected objective reality is borne by specific signals which are transformed into percepts, hence into brain processes and then into recepts which we are able to decode directly. Only facts of cognition are directly accessible to us and may be observed by introspection. Science, however, will obviously not confine itself to introspection. In order to understand the informational contents of the last link in the chain - consciousness - it is also necessary to know its carrier: brain signals, the brain code. The acts of recording, processing, conserving and transmitting information in the brain take place in the informational code, of which we know little as yet. The unknown brain writing (which people have in common regardless of their nationality and other differences) manifests itself for instance in spoken and in written language, in the expression and behaviour altogether. In other words, speech and writing are coded manifestations of brain processes, records of processes of man's higher nervous activity. The speech analyser is closely connected with the motor processes of the vocal organs in speech and of the hand in writing, with the visual processes in reading and the auditory processes in listening. The speech analyser, then, is connected with a number of sensory analysers and, by being connected with the outer organs, it is in contact with outer reality, which it assists in reflecting and, after all, in changing too. Physiological, psychical and linguistic processes bearing informational content also manifest themselves in a certain physical way, e.g. in an encephalographic record of brain waves or in acoustic waves of speech. The vibrations of the ear drum, in listening to speech, are transformed into nerve processes bearing information to the consciousness, and information of the consciousness may, through efferent nerve processes, be converted into vibrations of the vocal cords. Information of the consciousness may thus manifest itself objectively and become capable of being perceived by the senses.

Psychical processes are not only connected with physiological processes of the brain but also with facts of a physical nature. In order to create a universal language it is therefore necessary to use facts from the field of (1) psychophysiology, (2) linguistics, (3) the theory of information. The facts in these fields are simultaneously of physical and of mathematical nature. Thus a signal which the theory of information investigates as a bearer of information is a physical phenomenon, and the information which it bears is described mathematically. A brain process which psychophysiology examines as the bearer of some psychical content exhibits, among others, physical properties, e.g. electrical phenomena, and the psychical content may be taken as information which can also be expressed mathematically. A linguistic expression which linguistics study as the bearer of a certain content of meaning has also certain physical properties, e.g. acoustic ones, and its content may also be examined mathematically as information. From this point of view the objects of examination by the theory of information, by psychophysiology and by linguistics may be regarded simultaneously as phenomena of a physical (energetic) and mathematical (informational) nature. Physical signals bearing mathematical information are involved. Every information has a certain characteristic of probability, creates a certain field of probability in the informational space of the source or of the receiver. The signal, then, is a wave phenomenon and can be analysed as an informational cell with dimensions of frequency, amplitude and time. To this signal corresponds in psychophysiology a physiological process in the brain, and to the information corresponds the psychical content borne by that process, correlated to it and transformed from it into some fact of consciousness. Similarly in linguistics, some informational content is correlated to a spoken expression which consists of physical processes.

We pose the question how to express and depict information of our brain, the unknown brain writing, in such a way that it will be universal and suitable for different fields of science. It is possible to proceed in three different ways: (1) to examine physiological and other expressions of the brain, as is done for example by electro-encephalography; (2) to examine acoustic and other expressions of speech, as is done for example by phonetics; (3) to create physico-mathematical models of the intellectual content of speech.

It must be borne in mind that information of the brain is very complex and in reality encompasses the whole world. A universal language would therefore have to be similarly complex and comprehensive.

BRAIN FIELDS

The possibility of connecting physiological facts - brain waves - with harmonic analysis was pointed out by Professor Wiener. The complex collective life of the cortex cells is reflected in various frequencies which combine into encephalographic curves. In different individuals and in different mental states, differences in these frequencies may be observed which are registered by the encephalogram. These recorded differences are, however, very small so far, and in principle we cannot judge from them more than that the person on whom the experiment is carried out is asleep, awake, excited, etc. This line of research is therefore largely dependent on further development of neurophysiology and encephalography. It is, however, true that these manifestations of brain writing - the encephalograms - though of small plasticity as to content, are not merely conventional signs such as the letters of the alphabet. Unfortunately, little is known as yet about the connection between form and content.

Lucid expression of transformed brain processes is also possible in phonetics, which is able to record the sound waves of speech. A perfection of this method is Visible Speech which, by means of certain equipment, transforms the sound of speech into optical shapes, in other words, transforms temporal phenomena into spatial and graphic ones. Here again, similarly to electro-encephalograms, expressions of brain writing are involved which are not merely utterly conventional signs such as letters, but are direct transformations of brain processes through the medium of the vocal organs. They are, however, anything but universal, because spoken sounds are correlated to individual national languages and the blotches of words in Visible Speech are different for example for French and German, etc. Here again, the connection between form and content presents a problem.

In old picture-writing the form is closely connected with the content, e.g. a painted bird: as a signal it directly denotes the given object. This graphic writing based on the world of sense, however, is inadequate when abstraction and generality are required. Thought and speech must be based on the positive side of limiting variety. Expression by means of picture-writing is subject to the senses and outer reality, but not to thought and the transforming human subject. Effective expression by means of brain writing, however, must combine the graphic with the general, the concrete with the abstract.

Expression by means of brain writing must be a graphic expression of non-graphic matter, a transformation of temporal phenomena into a spatial record. While our concepts are formed and experience is gained, certain structures of more or less unknown physiological nature, in fact certain brain fields of excitation, form in the brain. Stagner and Karwoski, in their book 'Psychology', quote the opinion held by several psychologists and neurologists, according to whom outer and internal stimuli perhaps influence the creation of brain fields in the sensorium in a manner similar to a changing magnetic field which causes iron filings to arrange themselves into different figures. Experience, then, would create fields in the brain, with a specific distribution of points of excitation which, being preparatory patterns, would influence our perception and thinking. An analogy with Chladni's sound figures could also be used: if we cause, e.g. by means of a bow, a brass plate lightly covered by fine sand to vibrate, the sand arranges itself into various figures, in other words, vibrations in time are converted into shapes.

Semon's and Uchtomski's theories proceed from the wave character of brain activity. Diderot already used an analogy with music in his explanation of psychical life - according to him thought is the sum of perceptions just as sound and melody are composed of a number of vibrations of the string. Semon devised a vibration and resonance theory of psychical processes. According to Semon, perception consists in the transmission of a certain vibration by the nerve tissue; this vibration leaves an imprint in the memory, a trace called an engram, which, in the act of recollection, may again be transformed into vibrations under the influence of a similar stimulus - somewhat as a string gives out sound when acted upon by sound waves of the same frequency. Complex psychical phenomena (association, feelings, thought) originate, according to Semon, through the creation of unisonic formations according to the principles of homophony. It is really some kind of musical theory of psychical processes which delegates to the memory a function somewhat similar to the function of a gramophone record.

Some concepts of the wave theory are also used by the physiologist Uchtomski. According to him there exist in the brain foci of increased excitation - dominants - which have a low stimulation threshold and are places of easy formation of connections. They are centres with a certain rhythm, which resound easily under the effect of stimuli of equal or similar rhythm. The entire spiritual life is governed by these combining brain fields which have different levels of excitation energy.

It can be assumed that the registration of information in the brain occurs on the basis of the formation of excitation brain fields where temporal processes are converted and are fixed in a specific shape, i.e. in a specific distribution of points and of degrees of excitation. These fields are apparently linked with wave phenomena of a certain frequency and amplitude, and affect each other according to the principles of the wave theory (it is unlikely that the brain processes should not be connected at all with wave phenomena). However, processes taking place in the brain are confined to the brain itself and cannot be transmitted in a similar manner as, for example, electromagnetic waves; they can only be converted by efferent impulses into activity of muscles, vocal organs, etc. Each brain field is correlated to a certain excitation energy and a specific informational content which could be characterized as a specific semantic spectrum. These semantic spectra of the brain fields can combine and mix in a variety of ways. The centre of the field has a high degree of probability of activation and a high degree of semantic definiteness; towards the periphery the degree of probability of activation decreases, and though the number of points of excitation increases, they become gradually weaker and the periphery is marked by greater semantic indefiniteness, vagueness, entropy.

HARMONIC ANALYSIS OF SEMANTIC INFORMATION OF CONCEPTS AND STATEMENTS

Vibrations of the cortex cells, where the processes of thought take place, are coded into vibrations of the vocal organs, and these are transformed into acoustic vibrations. Speech, then, is the external manifestation of thought, a form of physical content, being connected rather with the technical aspect of information, whereas consciousness is connected with the semantic aspect of information. It can be reasonably assumed that brain processes are connected with various resonance, vibration, rhythmical, and wave phenomena, and that they are subject to the principles of combination and propagation of waves and other principles of the wave theory. Just as the excitations of points of the retina combine into excitation of the optic nerve, these excitations into a sensation and the sensations into perception, it can be assumed that in the brain the variety of certain microprocesses is confined and leads to transformation into macroprocesses appertaining to the field of physiology, psychology or behaviourism.


Brain vibrations are not transmitted directly; they are transformed into manifestations of speech and behaviour. The brain fields of individuals, therefore, are not formed by some direct transmission, telepathically, but are formed by gradual yet strenuous activity in the process of study, education, spoken communication, and ideological interaction. The basic pattern of the brain is inherited in the shape of hereditary series of unconditioned reflexes, dynamic stereotypes, basic dominant foci, whereas the main scientific and moral concepts of society are acquired in the process of formation of brain fields in human individuals. There is very insufficient direct information available about the character of these fields which bear semantic information of our concepts and statements. Such information as there is, is received with the aid of encephalograms, analysis of speech or other such expedients.

Let us proceed from the following theses. (1) Harmonic analysis is well developed; it is an accurate and lucid means of expression. (2) Analysis of electro-encephalograms and spoken sounds yields insufficient information on the informational contents of thought. (3) We assume that brain information is connected with wave processes.

From this the following possibility of examining and expressing brain writing ensues: harmonic analysis or other methods of higher mathematics may be used for analysis of and application to the aspect of content of the language, i.e. certain waves may be correlated with certain concepts and statements, and thereby models of brain writing for expressing semantic information and its processing are produced.

Considering that mathematical logic originated from the application of algebra to formal logic, it would certainly seem useful to apply higher mathematics to the content, the semantic aspect of language and thought, while utilizing at the same time the facts made known to us by psychophysiology. Universal, conceptual writing would facilitate more accurate and more comprehensive processing of information by mathematical analysis of complex texts and of psychopathological manifestations; it would make possible the association of isomorphic concepts, the creation of new concepts, etc.

If brain processes have a wave character, certain periodic functions can be perceived in them which can be resolved by harmonic analysis into Fourier series. We can experimentally correlate certain types of waves to certain contents of thought (categories, concepts, statements). These contents could be taken as different modulations of the basic carrier wave, which could be for instance a complex and irregular noise spectrum or, on the contrary, a simple sinusoidal wave. To the function of the carrier wave we might correlate some boundary quantity of information - either a very small one or a very large one, in the semantic sense the category of being, i.e. reality.

Individual concepts and statements are, in the gnoseological sense, reflections of objective reality and they only contain, and even strengthen, some aspects of that reality: they are limiting factors of its variety, of its semantic spectrum (which could correspond to white noise). Every thought-bearing linguistic formation (concept, sentence, text) has, in addition to its formal expression, a certain objective semantic content, is the bearer of some meaning. This meaningful content is never absolutely simple and may be considered a semantic spectrum comprising certain components. Its physiological basis is a certain brain focus connected with the complex neural network. The semantic components of this formation may be considered the harmonic components of that spectrum. There are formations which have similar spectra; this similarity may be only superficial, formal (e.g. acoustic, homonymic), or it may be based on the content (semantic, synonymic). For instance the words 'moon' and 'he' have very similar semantic spectra. In sentences words are arranged on the basis of some connection. A haphazard collection of words has only the outward appearance of a sentence but there is no sense in it; words of syntactically dissonant semantic spectra are put together there. Semantic spectra can be variously mixed; thus for instance hybrid words originate, new formations, etc. (e.g. the semantically ambiguous, vague formations made up by the poet Morgenstern). Every linguistic formation is only a core, set in the complex semantic field of association, and has a certain degree of indefiniteness, entropy. Every component of a sentence has a certain semantic scatter, indefiniteness. Missing words in a sentence or missing sentences in a text may be complemented in various ways.

In processes of thought, no matter whether expressed in speech or not, certain brain foci are connected on the basis of some correlation or similarity; these foci have specific semantic spectra with a certain characteristic of probability, and if they are strengthened by some stimulus, they transmit their waves into their vicinity (here Huygens' principle of wave propagation probably applies), and semantic spectra combine in various ways. These processes have their nuances in various types of people; the poetic type, for instance, has more ambiguous, less definite and well-defined semantic spectra than the scientific type.

Similar concepts apparently have spectra of a similar nature; their waves exhibit only minute differences as to phase, amplitude, or frequency. Concepts rich in associations, ambiguous, indefinite (but not informationally empty) concepts apparently show a certain similarity to complex noise spectra. On the other hand, general, vague, informationally almost empty concepts have simple spectra (not composite, regular). The above-mentioned concept of being might have, in view of its high degree of generality, a very simple spectrum; if, however, it is understood to comprise virtually everything, then its spectrum would be of maximum complexity.

Let us take this simple model of a dictionary: We have a general term X which comprises the concepts A, B, C, ...; concept A comprises the less general concepts k, l, m, ...; concept k comprises the concepts r, s, t, ..., etc. And yet concept r is richer in content than concept k which comprises it. A similar relation exists between k and A, and between A and X. For example, X could mean 'intellectual', A artist, k graphic artist, r painter; also statements could be involved, as X - 'Mr P. is an intellectual', etc. The given concepts could be depicted as waves which represent periodic functions that can be resolved into Fourier's harmonic series, where M0 is the d.c. component, M the amplitude, ω the angular frequency, t time, φ the phase, T the period; ω = 2π/T. Individual wave lines denoting concepts would represent the following functions:

X: f(x1) = M0 + M1 sin(ωt + φ1)

A: f(x2) = M0 + M1 sin(ωt + φ1) + M2 sin(2ωt + φ2)

k: f(x3) = M0 + M1 sin(ωt + φ1) + M2 sin(2ωt + φ2) + M3 sin(3ωt + φ3)

r: f(x4) = M0 + M1 sin(ωt + φ1) + M2 sin(2ωt + φ2) + M3 sin(3ωt + φ3) + M4 sin(4ωt + φ4) = M0 + Σ(i = 1..4) Mi sin(iωt + φi)

In reality, of course, the functions would be much more complex because the system of words is much more complicated than in the simple model described above.
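The dictionary model above can be given a purely illustrative numerical form; the function semantic_spectrum, the choice of equal amplitudes and phases, and the sample points are our own assumptions and are not part of the author's proposal.

```python
import numpy as np

# Illustrative reading of the model: each more specific concept adds one more harmonic
# to the series f(x) = M0 + sum_i Mi*sin(i*w*t + phi_i); amplitudes and phases arbitrary.

def semantic_spectrum(n_harmonics, t, M0=1.0, M=0.5, phi=0.0, T=1.0):
    w = 2 * np.pi / T                                    # w = 2*pi/T, as in the text
    return M0 + sum(M * np.sin(i * w * t + phi) for i in range(1, n_harmonics + 1))

t = np.linspace(0.0, 1.0, 5)
for name, n in [("X (intellectual)", 1), ("A (artist)", 2), ("k (graphic artist)", 3), ("r (painter)", 4)]:
    print(name, np.round(semantic_spectrum(n, t), 3))
```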

On the basis of a close connection between brain physiology, mathematics, and linguistics it would be possible to create by this method mathematical models of brain writing which would gradually enable us to represent better and more uniformly the objective world including its subjective reflection in man's mind. This, of course, is no easy task. To ascertain the fundamental spectra of categories, concepts, and speech altogether would require vast and combined efforts by linguists, mathematicians, physiologists, psychologists, physicists, philosophers, etc. In spite of that it seems to us that here could be found a way to create a universal, general language and writing on a strictly scientific basis. Such a language could be the foundation of unification of sciences, complex processing of scientific information, etc., and would open up unthought-of possibilities of further development of human knowledge.

SUMMARY

The endeavours to work out a universal scientific language should not neglect the aspects of the contents of a language, and should take up the history of symbols in writing (the ancient Chinese Pa-kwa writing, Leibniz, etc.). Informational languages, machine translation, the theory of neural networks and mathematical logic primarily stress the form rather than the contents of a language. The process of registration, elaboration and transmission of information in the brain is carried out in an informational code, as yet unknown to us. The unknown brain writing is manifested for example in the spoken and written word and in behaviour in general. Language and writing are coded expressions of the activity of the brain. Physiological, psychical and linguistic processes which bear an informational content manifest themselves in a certain physical form, e.g. in the encephalographic registration of brain waves or in acoustic waves of the spoken word.

Wiener points out the possibility of a harmonic analysis of brain waves. In a harmonic analysis of the brain writing, the ideas of Semon's vibration and resonance theory of brain activity and of Uchtomski's theory of dominant brain foci and fields of excitation energy could be applied. The registration of information in the brain writing creates brain fields in which temporary phenomena are transformed into symbols, much as in the pictures of visible speech or in Chladni's sound figures. Every brain field is connected with a certain excitation energy and a certain informational content which manifests itself as a specific semantic spectrum. These spectra can be joined and mixed together in various forms.

It can quite logically be assumed that brain processes are connected with resonance, vibration, rhythmical and wave phenomena, and that the principles of the composition and propagation of waves, as well as other principles of the theory of wave action, can be applied here.


In brain activity, certain microprocesses change into macrophenomena of a physiological and psychic nature and into behaviour.

The brain vibrations cannot be carried over directly, but they can be transferred in a simplified form to the encephalogram or transformed into the spoken word. Through language, education and ideological activity, information about the main, socially acknowledged scientific and moral concepts can be transmitted and can form brain fields of individuals. We still know very little about the brain writing, since both the encephalogram and the harmonic analysis of spoken sounds inform us only insufficiently about the semantic information of concepts and statements. If we assume, though, that the brain information is connected with wave action, we can use another method and apply the harmonic analysis directly to the subject side of the language, i.e. attach certain waves to certain concepts and statements. The semantic information of concepts and statements, which can be comprehended as a modulation of some carrier wave, can then be expressed by Fourier series. Thus we can arrive at artificial mathematical models of brain writing and prepare the connection of brain physiology with mathematics and linguistics.


Information Psychology and Telecommunication

H E L M A R F R A N K

Institut für Nachrichtenverarbeitung und Nachrichtenübertragung der Technischen Hochschule Karlsruhe, Karlsruhe (Deutschland)

C O N T E N T S

1. The concept of information psychology
2. Points of contact between information psychology and telecommunication
   2.1. Models - 2.2. Practical task - 2.3. Philosophical interpretation
3. Concrete examples
   3.1. Models of information psychology
        3.1.1. Conceptual models - 3.1.2. Illustrative models - 3.1.3. Technical models - 3.1.4. The value of models
   3.2. Practical tasks
   3.3. A philosophical point of contact between information psychology and telecommunication
References

1. T H E  C O N C E P T  O F  I N F O R M A T I O N  P S Y C H O L O G Y

The categories of matter and of (physical) energy have practically never been of importance for psychology, not even for experimental psychology oriented towards the natural sciences. One could see in this an argument for counting psychology among the so-called 'humanities' (Geisteswissenschaften). The opposition between the natural sciences and the humanities has, however, since been called into question by cybernetics: on the one hand it is characteristic of cybernetics that it investigates not material and energetic but rather informational relations, which has always been a concern of the humanities, while on the other hand the origin and hitherto principal field of activity of cybernetics lie in technical and biological problem areas, that is, not in the humanities. That cybernetic concepts and questions are now also penetrating branches of research traditionally belonging to the humanities, e.g. the field of aesthetics (Moles, 1958; Bense, 1956/60; Frank, 1959a, b; Goubeau, 1960; among others), is further evidence that cybernetics is at present bridging the gulf between the natural sciences and the humanities which, at least in Germany, has hitherto been felt so strongly. For psychology this means an opportunity to reconsider its position in the system of the sciences.

In particular, psychology can attempt to make use, as far as possible, of the terminology developed by cybernetics, i.e. to describe psychic phenomena in concepts such as information, coding, channel, feedback, control, and so on. This is a possibility, not a necessity. If we call a line of research that makes use of this possibility 'information psychology', then it is conceded from the outset that other representations and interpretations of the same psychological facts by other psychological schools are also possible and legitimate. (In mathematics, too, the same state of affairs can be represented and justified either purely geometrically or analytic-geometrically, without either of the two justifications thereby being 'wrong'. Just as little is the theory of elasticity 'wrong' in physics; strictly speaking there is no elastic body, but there are wide domains in which an elastomechanical description is particularly simple and sufficiently accurate for predictions.) Of course, the use of cybernetic concepts in psychology makes sense only if it permits a simpler description of the observed phenomena or a simpler reduction of one phenomenon to another. That this is actually the case, i.e. that information psychology has a right to exist, can be documented by numerous individual results, a few of which will be mentioned here. (Further evidence can be found in the comprehensive accounts of Attneave, 1959; Luce, 1960; Rohracher, 1961; Frank, 1962b.)

Information psychology, like probably every theoretical discipline, makes simplifying neglects. (Elastomechanics, for example, neglects the existence of limits to Hooke's law.) It thus forms for itself a model of psychic events. (A model is a homeomorphic image of an object, 'homeomorphic' meaning here that the mapping, while not ambiguous, does not capture in the model all the distinctions that could be made. In this sense one can, for example, regard the system of rational numbers as a model of the system of fractions, the distinction between reduced and non-reduced fractions not being mapped onto the model.) If the conceptual development of the information-psychological model restricts itself to those cybernetic terms which are used in telecommunication, the possibility arises of illustrating information-psychological theories by block diagrams or flow charts, or even of realizing them by telecommunication models. This yields points of contact in principle with telecommunication (Frank, 1961b).

2. P O I N T S  O F  C O N T A C T  B E T W E E N  I N F O R M A T I O N  P S Y C H O L O G Y  A N D  T E L E C O M M U N I C A T I O N

The points of contact between information psychology and telecommunication arise in three respects: I. through model building; II. in the practical task; III. in the philosophical interpretation.

2.1. Models, in the sense of simplifying images of the original object of research, can be (cf. Frank, 1962a):
2.1.1. conceptual models: words;
2.1.2. illustrative models: construction plans, circuit diagrams, Felix Klein's model of non-Euclidean geometry, etc.;
2.1.3. sensorily perceptible (technical) models: missile models in the wind tunnel, models of the conditioned reflex, etc.

That words, too, are counted among the models may be surprising. But Schopenhauer (1847) already saw this very clearly:

'Since, as has been said, the representations sublimated into abstract concepts and thereby decomposed have lost all intuitive character, they would slip away from consciousness entirely and would not stand by it for the operations of thought intended with them, were they not fixed and held fast for the senses by arbitrary signs: these are the words.' (§ 26).

'Precisely because concepts contain less than the representations from which they have been abstracted, they are easier to handle than the latter, and stand to them roughly as the formulae of higher arithmetic stand to the mental operations from which they arose and which they represent ... They contain, of the many representations from which they are drawn, just that part which one happens to need; whereas if one wished to call up those representations themselves by means of the imagination, one would have to drag along, as it were, a load of inessentials and would thereby become confused; now, however, by the use of concepts, one thinks only those parts and relations of all these representations which the purpose of the moment requires. Their use is accordingly comparable to the throwing off of useless baggage ...' (§ 27).

What Schopenhauer says here about concepts or, strictly speaking (the quotation is from § 26!), about words is nothing other than a clear exposition of the heuristic function of all models; Schopenhauer's thinking thus reaches one of the pivotal points of cybernetic thinking (Frank, 1962c).

2.2. The practical task of information psychology can be:
2.2.1. the empirical or, where possible, deductive determination of numerical values which characterize man's capacity for information processing and which can serve the planning of work processes in which both man and machine take part;
2.2.2. the study of the behaviour of the individual as well as of human group behaviour, with the near goal of simulating it by means of digital or analogue computers (Hovland, Haseloff and others) and the distant goal of making it possible to replace man by automata. (A concrete example from the field of 'problem solving' is given by Newell and Simon, 1961.)

Of course, such problems have long been worked on in connection with information-theoretical investigations without any talk of 'information psychology'; cybernetics in general does not in the first place open up new fields of research but rather coordinates investigations already in progress through uniform concept formation and superordinate points of view (Wiener, 1948, p. 23). It is perhaps not superfluous, however, to point out that the psychological interest of the telecommunication engineer cannot be satisfied by just any psychological school; the humanities-oriented 'verstehende Psychologie' (psychology of understanding), for instance, which abroad is widely regarded 'as a typically German phenomenon' (Hofstätter, 1957, p. 315), is of relatively little use to him, since he usually needs quantitative statements.

2.3. A philosophical interpretation of the 'message' underlies, more or less openly, every description of technical systems which not only process messages but also transmit them, as it does many justifications for the introduction of a measure of information. It seems that the theme of consciousness is unavoidable here, if only as a model conception (Günther, 1957; Steinbuch, 1962; Frank, 1962a). Consciousness, however, is a theme of every psychology that is not radically behaviouristic (Rohracher, 1960, p. 65; Hofstätter, 1957, p. 62 f.).

These points of contact will be documented in Section 3 by a few concrete examples.

3. C O N C R E T E  E X A M P L E S

3.1. Models of information psychology

3.1.1. Conceptual models
Several technical terms of telecommunication lend themselves very well to a compressed description of psychological phenomena, a description which naturally involves simplifications. The concepts channel, information and coding may serve as examples.

By a channel (in a generalized sense; cf. Frank, 1961a) is understood a device for the transmission of a message from one point of four-dimensional space-time to another, the message possibly being altered in the channel systematically or at random. In particular, a 'space channel' is present when the time difference between input and output is inessential (e.g. the telephone), a 'time channel' when the spatial transport is inessential (e.g. a monument), and a 'space-time channel' in every other case (e.g. a journal). If we call the first phase of perception that is not yet connected with consciousness — in which the outside-world stimuli are encoded by the sense organs, transmitted to the nerve centres, processed there and finally offered to the wandering attention (the 'apperception') — perception proper (Steinbuch and Frank, 1961), then this process of perception takes place in a space channel which probably extends from the sense organs to the sensory projection centres. Psychologically more important are two 'time channels' of man: the short-term store (Kurzspeicher) and the preconscious memory. The term short-term store denotes the agency by which the successively presented contents of consciousness remain conscious, with their order preserved, for about 10 sec (the 'duration of the present'). The preconscious memory, on the other hand, denotes that time channel whose content is not continuously conscious to us, but which we can make conscious associatively by suitable key information, i.e. call into the short-term store. (These time channels need not necessarily be identifiable with particular neural networks; cf. the quite different interpretation in Steinbuch, 1962. The short-term store is at first only a conceptual model serving to simplify the description, comparable perhaps to the complex notation for voltages or currents.)

By tachistoscopic methods one can determine the maximum size of a message that can just barely be apperceived (become conscious) in a given short time. The feasibility of this experiment is guaranteed by the short-term store, which preserves the apperceived parts of the message for a few seconds, so that they do not vanish from the subject's consciousness before he can reproduce them.

Fig. 1. The results of Miller, Bruner and Postman in linear coordinate systems. For the same test series, (a) shows the percentages of perceived characters and (b) the amounts of information taken in, as functions of the exposure time (0 to 0.5 sec). Extrapolation of the linear course beyond 1/16 sec shows that at least 12 bits were taken in through after-effects, so that the inflow capacity lies at about 16 bit/sec.


Fig. 1 shows that the relation between apperceived information and tachistoscopic exposure time is linear, at least for exposure times longer than about 1/16 sec, a psychological unit of time which has been familiar for a century as the 'subjective time quantum' (SZQ) or 'moment'. (The steeper rise of the curve at extremely short times indicates that the apperception time is somewhat longer than the exposure time, i.e. that subordinate agencies also contain time channels.) The experiment (Miller et al., 1954) showed further that the message apperceivable in a given time span can be the larger (measured in apperceived characters) the more redundant the message is (Fig. 1A), but that to a good approximation a certain amount of information can be apperceived in a given time span independently of the redundancy (Fig. 1B). The telecommunication concept of information is therefore suitable in psychology too as a measure of the size of a message. Incidentally, from the slope of the curve in Fig. 1B it can be inferred that the inflow capacity to the short-term store amounts to about 16 bit/sec, i.e. 1 bit per SZQ (Frank, 1959a, § 3.6), an order of magnitude which agrees well with the results of other experiments. Correspondingly, an inflow capacity to the preconscious memory of about 0.7 bit/sec can be calculated consistently from various experiments (Von Cube, 1960; Frank, 1962b).
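The relation just described can be restated as a small calculation. The 16 bit/sec and the 1/16 sec time quantum are the values given in the text; the bits-per-character figures in the example are assumptions introduced only to show why a redundant message yields more apperceived characters for the same amount of information. The extra uptake through after-effects mentioned in the caption of Fig. 1 is ignored here.

```python
SZQ = 1.0 / 16.0   # subjective time quantum in seconds (from the text)
INFLOW = 16.0      # inflow capacity to the short-term store, bit/sec (from the text)

def apperceived_bits(exposure_time_s):
    """Information apperceivable during a tachistoscopic exposure, using the
    linear relation of Fig. 1B for exposures of at least one SZQ."""
    return INFLOW * max(exposure_time_s, SZQ)

def apperceived_characters(exposure_time_s, bits_per_character):
    """The same bit budget buys more characters when the text is redundant,
    i.e. when each character effectively carries fewer bits (Fig. 1A)."""
    return apperceived_bits(exposure_time_s) / bits_per_character

# Illustrative comparison for a 0.5 sec exposure:
random_letters = apperceived_characters(0.5, 4.7)   # ~4.7 bit per random letter
redundant_text = apperceived_characters(0.5, 1.5)   # assumed effective redundancy
```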

A coding is demanded of a subject, for example, in experiments like those of Merkel (1885) and Hyman (1953). Hyman had the subject assign previously learned nonsense syllables, by virtue of an agreed code, to small lamps lighting up. The average reaction time in these experiments proved to be a linear function of the mean information of the light signals during the trials (Fig. 2). The slope of this straight line corresponds with great accuracy to an apperception speed of only 8 bit/sec. Here, however, in addition to the light signals the subjects also had to call to mind the syllables learned for them, with an equally large information content. To the perceived 'syntactic' information of the light signals, namely 8 bit/sec, there was thus added an equally large amount of memorized 'semantic information' through the coding, which again leads to 16 bit/sec (Frank, 1962b, p. 124 f.).

Fig. 2. Semantic information and reaction time. Simplified representation of the results of Merkel and of the best results (shortest reaction times) of Hyman, plotted against the information of the signals and the size of the repertoire (1 to 10). With increased practice the slope of the resulting straight line can be reduced down to a theoretical limit.
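The linear dependence of reaction time on signal information described above (today usually cited as the Hick–Hyman law) can be sketched as follows. The residual time of 0.2 sec and the use of equally probable alternatives are assumptions of the example; the 8 bit/sec slope, and its doubling to about 16 bit/sec once the memorized syllables are counted in, are the figures from the paragraph above.

```python
from math import log2

def mean_information_bits(n_alternatives):
    """Mean information of one signal drawn from n equally probable alternatives."""
    return log2(n_alternatives)

def reaction_time_s(n_alternatives, apperception_rate_bit_per_s=8.0, residual_s=0.2):
    """Linear model: a residual time plus a processing time proportional to the
    syntactic information of the light signal (slope = 1/8 sec per bit)."""
    return residual_s + mean_information_bits(n_alternatives) / apperception_rate_bit_per_s

rt_curve = [reaction_time_s(n) for n in range(1, 11)]   # repertoire sizes 1..10

# Each light signal also requires recalling a syllable of equal information
# content, so the total information handled per unit time is about twice
# 8 bit/sec, i.e. roughly the 16 bit/sec found tachistoscopically.
total_rate_bit_per_s = 8.0 + 8.0
```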

3.1.2. Illustrative models
The terminology inspired by telecommunication, of which some beginnings were given in Section 3.1.1, naturally suggests an illustration corresponding to the block diagram or the flow chart of the telecommunication engineer and the computer programmer. Fig. 3 shows (after Frank, 1961a; 1962b) the draft of a simplifying organogram for the flow of information in man. The word 'organogram' is meant to leave open to what extent processing centres (such as sense organs and projection centres) are linked with one another, as in a block diagram, and to what extent simply the succession of processes is represented, as in a flow chart. A simpler representation of the flow of information in man, tending more towards the block diagram, is given by Steinbuch (1961b, p. 188), who attaches to it a typology of psychic functions (1962).

Fig. 3. Organogram of the flow of information in man (outside world, stimulation and action, together with the direction of attention).


3.1.3. Technical models
The step from the illustrative to the technical model is an obvious one conceptually, however difficult it usually is technologically. For a first, rough approximation one may think of representing the short-term store by a linear, 160-stage shift register, the contents of all 160 individual cells (1 bit each) moving on by one position in each subjective time quantum. In this way the technical model expresses, first, the temporal order of the contents of consciousness, secondly their remaining conscious during the duration of the present (about 10 sec), and thirdly the apperception speed of 16 bit/sec. A coding-theoretical consideration shows, however, that decodability is lost as soon as the initial piece of the binary chain by which a content of consciousness would be represented in the shift register is missing. The model would therefore have to be extended, e.g. by a second shift register connected in parallel which marks the beginnings of the binary chains of the individual characters.
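What follows is a minimal sketch of this shift-register picture, using the figures given above (160 cells of 1 bit, one shift per subjective time quantum, hence roughly 10 sec of retention at 16 bit/sec). The second, parallel register marking the beginnings of the binary chains is the extension suggested at the end of the paragraph; everything else about the implementation is an assumption of the sketch.

```python
from collections import deque

STAGES = 160   # 160 one-bit cells: about 10 sec of content at 16 bit/sec

class ShortTermStore:
    """Crude shift-register model of the short-term store (Kurzspeicher)."""
    def __init__(self):
        self.bits  = deque([0] * STAGES, maxlen=STAGES)  # contents of consciousness
        self.marks = deque([0] * STAGES, maxlen=STAGES)  # 1 = start of a binary chain

    def tick(self, bit, starts_new_character=False):
        """One subjective time quantum (1/16 sec): the oldest bit drops out of
        the register (out of consciousness), one new bit enters."""
        self.bits.appendleft(bit)
        self.marks.appendleft(1 if starts_new_character else 0)

    def conscious_content(self):
        """Everything still inside the register counts as still being conscious,
        in its original temporal order."""
        return list(zip(self.bits, self.marks))

store = ShortTermStore()
for i, bit in enumerate([1, 0, 1, 1]):            # a 4-bit character entering
    store.tick(bit, starts_new_character=(i == 0))
```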

A technical model of the preconscious memory must above all take into account that we call memory contents to mind through associations, not by interrogating stored contents by means of addresses. As a building block for a memory model the learning matrix of Steinbuch (1961a, b) should be suitable. (Cf. also the lecture by P. Müller.)

The function of a learning matrix can be understood as a technical realization of the formation of a system of conditioned reflexes. Fig. 4 shows the principle without regard to the telecommunication details. Let the olfactory stimulus of the well-known Pavlov experiment be characterized by a sequence of 'olfactory features' e1, e2, etc., each of which may be either present or absent. If, for example, e1 is absent and e2 and e3 are present, then in Fig. 4 the lower e1 input, the upper e2 input and the upper e3 input of the matrix eAb are activated. Via the output b2 (which has the most triangle-marked connections with activated inputs), this stimulus combination is assigned by the matrix bRe a system of binary output signals (which can be interpreted, for example, as excitations or inhibitions of efferent neurons); in the example the outputs e1 and e3 of bRe are marked negative, e2 positive. The 'dipole' eAb–bRe is an invariable assignor which permits a system of unconditioned stimuli Sa to be coded into a system of unconditioned reactions R. If, however, eAb is excited by 'olfactory' and eBb simultaneously by 'acoustic' features, then the connections with that b line which is just triggering the unconditioned reaction form automatically in eBb, so that this reaction later follows, as a conditioned reaction, the acoustic stimulus as well.

Fig. 4. (a) Schematic representation of the formation of a conditioned reflex (example: Pavlov's dog experiment; bell stimulus Sb, olfactory stimulus Sa, reaction R = salivation, 'secretory control signals', olfactory features). (b) Corresponding arrangement of Steinbuch learning matrices; the arrangement drawn can form up to eight conditioned reflexes.
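The formation of a conditioned connection described for Fig. 4 can be caricatured in a few lines. The dimensions, the use of simple counters as connecting elements and the argmax selection of the most strongly connected b line are assumptions of this sketch, not details of Steinbuch's circuit.

```python
import numpy as np

N_FEATURES = 3      # e1, e2, e3 (olfactory or acoustic features)
N_B_LINES  = 8      # the drawn arrangement can form up to eight reflexes

class FeatureToB:
    """One input matrix (eAb or eBb): strengthens connections between active
    feature inputs and the currently active b line ('conditioned connections')."""
    def __init__(self):
        self.weights = np.zeros((N_B_LINES, N_FEATURES))

    def learn(self, features, active_b):
        self.weights[active_b] += np.array(features)        # strengthen links

    def best_b(self, features):
        return int(np.argmax(self.weights @ np.array(features)))

# Unconditioned path: the olfactory matrix is pre-wired to b line 2, which in
# turn would trigger the reaction R (salivation) via the output matrix bRe.
olfactory, acoustic = FeatureToB(), FeatureToB()
olfactory.weights[2] = [0, 1, 1]            # e1 absent, e2 and e3 present

smell = [0, 1, 1]
bell  = [1, 1, 0]                           # some acoustic feature pattern
b = olfactory.best_b(smell)                 # unconditioned reaction via b = 2
acoustic.learn(bell, b)                     # bell and smell presented together
assert acoustic.best_b(bell) == 2           # later the bell alone suffices
```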

It is natural to describe the preconscious memory by the conceptual model of the conditioned reflex, so that a system of learning matrices offers itself as a technical model. It may briefly be pointed out that Bok (1961), on the basis of observations of central nervous structures, has set up a hypothesis of memory and examined its analogies to and differences from Steinbuch's learning matrix.

Fig. 5. Binary coding and decoding by learning matrices and shift registers (stages: observation, transmission, reproduction; characters A, E, I, U). The reproduction (e.g. printing) of the four characters requires times which are proportional to the information of the characters (cf. the code tree). The model can be simplified further.

The experiments which establish the psychological relevance of the mathematical measure of information suggest assuming, in front of the short-term store, an agency which delivers the outside-world stimuli (e.g. recognized written characters) into the short-term store in such a way that the delivery time required is roughly proportional to the information of these stimuli. One can picture this in particular as a more or less optimal binary coding of the perceived objects. Fig. 5 shows one possibility of how an information-proportional reaction time can actually be represented by a system of learning matrices and shift registers (Frank, 1961b).
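How an optimal binary coding makes the delivery time into the short-term store roughly proportional to the information of a character can be seen from a small Huffman-type example. The four characters and their probabilities are free assumptions loosely following Fig. 5; the code lengths come out exactly equal to the information values here only because the probabilities were chosen as powers of 1/2.

```python
import heapq
from math import log2

def huffman_code_lengths(probabilities):
    """Code length (in bits) of an optimal binary code for each symbol; at one
    bit per subjective time quantum this is the delivery time in time quanta."""
    heap = [(p, i, (i,)) for i, p in enumerate(probabilities)]
    heapq.heapify(heap)
    lengths = [0] * len(probabilities)
    counter = len(probabilities)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)       # merge the two rarest nodes
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1                      # every merge adds one code bit
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]               # assumed frequencies of A, E, I, U
print(huffman_code_lengths(probs))               # [1, 2, 3, 3] bits
print([round(log2(1 / p), 2) for p in probs])    # information: [1.0, 2.0, 3.0, 3.0]
```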

3.1.4. The value of models
Models can in general have a didactic, a heuristic, a technical or a philosophical value. The didactic value of illustrative and technical models consists in the fact that what is visible, or at least imaginable, is more easily retained than abstract relationships. The heuristic value of all models lies in the reduction of complex states of affairs to an amount of information that is not too large in relation to the capacity of the short-term store. Where such a sufficient reduction is not possible but all the basic facts are captured axiomatically, technical models can, as simulators, supply the desired results, which would be difficult to find by calculation. The technical value will be discussed in Section 3.2.

As the philosophical value of the telecommunication models one may regard, with Moles (1958, p. 202), the existence proof for the physical realizability of psychic functions (at least in so far as these are considered behaviouristically).

Fig. 6. The estimated frequencies as a function of the true frequencies (both in per cent). In the example (crosses and optimal code tree) the frequencies of one-syllable, two-syllable, etc. words of a German text (Musil), previously read aloud with the syllables scanned, had to be estimated; circles mark the theoretical values, crosses the measured values, alongside the data of Attneave, Noble (1954) and Arnoult (1956).

The heuristic value of the model conceptions of information psychology outlined above may be demonstrated by an example. Several experimental psychologists (Attneave, Noble, Arnoult) had their subjects estimate numerically the relative frequencies of characters (e.g. letters), or sort them into given frequency classes. The estimates Si proved not to be proportional to the true frequencies hi of the characters Zi; rather, there was a clear tendency to overestimate the frequency of the rare characters and to underestimate that of the most frequent ones. Our model suggests the following interpretation:

A character Zi within an apperceived sequence of characters is the more conspicuous the more time, over the course of all its repetitions, has been spent in total on its apperception. This total time is proportional (1) to the frequency hi of the character and (2) to the information Ii of the character, so that the conspicuousness ai would be:

ai = const · hi · Ii = const · hi · ld (1/hi)

The constant can be calculated from the condition that the sum of all conspicuousness values should be 100%. In fact, up to about hi = 37% the conspicuousness function (curve in Fig. 6) agrees well with the empirically found estimation curves, so that it may be assumed that a person, as long as he is not practised in estimating frequencies, gives the conspicuousness value when estimating. But that would mean that beyond 37% his estimates would also become qualitatively wrong: the most frequent character would have to be felt to be rarer than the next most frequent one. In the experiments mentioned, so high a frequency did not occur (Fig. 6). The author's own experiments, however, demonstrated this 'maximum effect' (crosses in Fig. 6), which, to be sure, decreases with growing practice, growing age and also with higher intelligence, and disappears entirely in many experimental arrangements.

If one starts from the proposed rough telecommunication model of the short-term store, namely a shift register with an optimal coding device connected in front of it, then in the above formula the number of bits required for the optimal coding of Zi (cf. the code tree in Fig. 6) would have to be inserted instead of the information value ld (1/hi). One then obtains as the theoretical prediction for the estimates, instead of the curve, the points marked by circles in Fig. 6, i.e. a more pronounced maximum effect, which also agrees better with the empirical findings. (The effect, together with its dependence on intelligence, age and practice, is described in more detail in Frank, 1962, p. 104 ff.) Here we have one of the few indications that somewhere in the central nervous system a digital, binary coding of messages could take place. (Other, physiological indications are given by Schwartzkopff, 1962.) In our context the effect is of interest only as an example of the fact that a useful model not only serves the illustrative interpretation of an empirically found state of affairs (in the example, the results of Attneave, Noble and Arnoult), but can also suggest unexpected consequences. If these are confirmed by experiments (which would otherwise probably never have been carried out!), as in our example, then the model has proved to be heuristically valuable.
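The two predictions discussed above can be sketched as follows: conspicuousness weighted with the ideal information ld (1/h), and, for the shift-register variant, with a whole number of code bits. Rounding up to whole bits here merely stands in for the optimal code tree of Fig. 6, and the true frequencies in the example are assumed.

```python
from math import log2, ceil

def estimates(true_freqs, whole_bits=False):
    """Predicted frequency estimates: conspicuousness a_i = h_i * I_i,
    normalized so that all estimates add up to 100 per cent."""
    def info(h):
        bits = log2(1.0 / h)
        return ceil(bits) if whole_bits else bits   # code-tree variant: whole bits
    a = [h * info(h) for h in true_freqs]
    total = sum(a)
    return [100.0 * x / total for x in a]

true = [0.55, 0.20, 0.15, 0.10]         # assumed true frequencies (sum = 1)
print(estimates(true))                   # rare signs over-, frequent under-estimated
print(estimates(true, whole_bits=True))  # a more pronounced 'maximum effect'
# Note: h * ld(1/h) peaks at h = 1/e (about 37 %), so beyond that frequency the
# most frequent sign is predicted to be judged rarer than the next most frequent.
```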

3.2. Practical tasks

The technical task of cybernetics is to relieve man of those mental labours which he would gladly hand over. For this it is often necessary, though not always, to know how man himself carries out the information-processing operations that are to be automated. That knowledge of human behaviour is not always required rests on the fact that the same task may be solvable in different ways and that the optimal solution depends on the material and the tools. Since, for example, Steinbuch's learning matrix is intended to fulfil technical tasks, it is from these, and not from the most faithful possible imitation of classical conditioning, that the criteria of its optimal realization follow, which, because inorganic components are used, need not come to the same thing (cf. Bok, 1961). To choose yet another example: the fact that a human being uses a strict procedure for calculating a square root does not contradict the fact that iterative procedures are more expedient for today's program-controlled computers.
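As an aside to the square-root example, here is a minimal sketch of the kind of iterative procedure (Newton's, or Heron's, rule) that suits a program-controlled computer better than the strict digit-by-digit method a person learns; the tolerance and the starting value are arbitrary choices of the sketch.

```python
def iterative_sqrt(x, tolerance=1e-12):
    """Newton/Heron iteration for the square root of a non-negative number."""
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0              # any positive start value converges
    while abs(guess * guess - x) > tolerance * x:
        guess = 0.5 * (guess + x / guess)     # average of guess and x/guess
    return guess

print(iterative_sqrt(2.0))   # 1.41421356...
```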

In spite of these counter-examples, however, the cases in which knowledge of the human mode of information processing is important for cybernetic technology are by no means mere exceptions. For example, character-reading machines must be required to form invariants. They must, for instance, recognize variously sized, slanted and distorted printed forms of the letter M as the same character, yet strictly distinguish them from its rotated forms (which read, for example, as a W). In human perception, invariance under translations, similarity transformations, affine transformations and shears plays a considerable role, whereas invariance under rotations does so only to a small degree, as the example shows.

Steinbuch's non-digital learning matrices now permit the formation of invariants with respect to affine transformations, translations and shears, but detect rotations (Steinbuch and Frank, 1961). The technical realization of this model is treated in the lecture by P. Müller. We confine ourselves here to a theoretical consideration of this arrangement*.

As with the binary learning matrix already mentioned in Section 3.1.3, in the non-digital learning matrix too the object to be recognized (e.g. a musical triad) is captured by features (e.g. tones), which, however, in contrast to the binary learning matrix, are not simply present or absent but can be present in varying 'intensity' (in the example of the triad: the frequencies of the three tones).

In the case of a triad there are only finitely many features, namely three. In the case of a letter, say the W, one can regard the character as the graph of a function and sample this so-called 'feature function' at finitely many points; for the letter W, with linear interpolation, 5 points suffice (cf. Fig. 7a). In any case one arrives at a finite system of real numbers p, either as the result of the sampling (Fig. 7b) or because they are given from the outset (the three frequencies of the triad). These numbers p can be regarded as the components of a vector p and, in the three-dimensional case, represented graphically. A multiplication of all vector components by the same number, which corresponds to an affine transformation of the feature function or to a transposition of the triad into another key, means in the vector representation merely a change of the magnitude of the vector p at constant direction. In particular, some comparison vectors s1, s2, s3 and s4 can have length 1 (Fig. 8). Let them be normalized representatives of certain standard objects (e.g. four selected triads from the continuum of all possible chords).

Fig. 7. (a) Sampling of a feature function at n points. (b) Intensities of n features (e.g. sampled from a feature function), which can be interpreted as the components of an n-dimensional vector.

* Cf. P. Müller, 'Classes and properties of learning matrices', in this volume, p. 97.

Fig. 8. Representation of four standard objects (patterns, e.g. chords) by four unit vectors si in a three-dimensional feature space. In the case of a triad, the three vector components can represent, for example, the frequencies of the three tones of the triad.


If such an object is slightly disturbed, e.g. if the ratio of the frequencies changes a little, then in the vector representation of Fig. 8 there results a small change of direction of the vector (and possibly an additional change of magnitude). As long as the disturbance, i.e. in the vector picture the change of direction, is not too great, a human being recognizes the object again. Under a large disturbance, however, he may also wrongly recognize it as another standard object (pattern). In the model this behaviour can be captured by forming the inner products of the vector p corresponding to an observed object (an actual triad) with the representatives si of the patterns, and by interpreting p as a disturbance-induced deviation of that si with which the largest inner product results (i.e. to which the angle is smallest). The patterns of Fig. 8 thus divide the space into four sectors: every object represented by a vector in the sector of si is recognized as the pattern represented by si, or as a variant of it. (The heavily outlined spherical triangle marks the region of non-negative vector components; only this region is of interest in the example of the triad.) The invariance of recognition with respect to affine transformations, i.e. with respect to lengthenings or shortenings of a vector, is obvious, since the sector is not left thereby. (The formation of invariants with respect to translation and shear will not be treated here.)

Up to this point, however, the model does not yet reproduce an important feature of human behaviour. For the model 'recognizes' any arbitrary object, even one that has come about by chance, as one of the four imprinted patterns, whereas for a human being something that has come about by chance almost always appears as 'disorder', because he admits only a small region of scatter around the patterns as 'disturbances' of them. This behaviour can be captured by the model by fixing a threshold for the angle between p and si (e.g. 10° in Fig. 9) which must not be exceeded if p is still to be interpreted as a variant of si. Everything that falls outside the 'islands of order' in Fig. 9 is then interpreted as disorder. The arrangement can be improved further as a model of perception processes if one assumes that the si do not all have length 1 from the outset and retain it unchanged, but grow from length 0 during a learning process and gradually become shorter again when used too rarely. Since then the ratio of the inner products of the si with a vector no longer depends exclusively on the angles to this vector but also on the lengths of the si, the permitted regions of scatter around the si are, at a fixed threshold value, large for si that are already well learned or frequently used, and small for si that are newly learned or rarely used. (The p must be normalized before the products are formed, or the threshold value made proportional to |p|.) This corresponds to human behaviour.

Fig. 9. Catchment regions of four perception forms, bounded by threshold and extreme-value determination.
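The sector model with threshold can be stated compactly, as in the following sketch. The four pattern directions, the 10 degree threshold and the test vectors are illustrative assumptions; the decision rule itself (largest normalized inner product, rejection as 'disorder' once the best angle exceeds the threshold) is the one described above. The learning refinement, in which the lengths of the pattern vectors grow with use, is not included.

```python
import numpy as np

# Assumed unit pattern vectors s_i in the three-dimensional feature space
# (for a triad the components could be the three tone frequencies, normalized).
patterns = {
    "s1": np.array([0.8, 0.4, 0.2]),
    "s2": np.array([0.2, 0.9, 0.3]),
    "s3": np.array([0.3, 0.3, 0.9]),
    "s4": np.array([0.6, 0.6, 0.5]),
}
patterns = {name: v / np.linalg.norm(v) for name, v in patterns.items()}

def recognize(p, threshold_deg=10.0):
    """Return the pattern whose direction is closest to p, or 'disorder'
    if even the smallest angle exceeds the threshold."""
    p_unit = p / np.linalg.norm(p)                  # scaling invariance
    cosines = {name: float(s @ p_unit) for name, s in patterns.items()}
    best = max(cosines, key=cosines.get)
    angle = np.degrees(np.arccos(np.clip(cosines[best], -1.0, 1.0)))
    return best if angle <= threshold_deg else "disorder"

print(recognize(3.0 * patterns["s2"] + np.array([0.02, 0.0, -0.01])))  # 's2'
print(recognize(np.array([1.0, -1.0, 0.2])))                           # 'disorder'
```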

Occasionally it will be expedient to raise or lower the thresholds of different si by different amounts, for example when, on the basis of preceding evaluations of offered information, this or that object is ruled out while, on the other hand, an extremely large disturbance has to be reckoned with. Fig. 10 shows an arrangement which captures such behaviour, which already takes some account of the interplay of perception and apperception, in a telecommunication model. (Such possibilities of extending the model will be reported in one of the next issues of the Grundlagenstudien aus Kybernetik und Geisteswissenschaft.) It needs no justification that such an arrangement can serve not only as the model of a theory of perception but in particular also for technical tasks.

Fig. 10. Non-digital perceptor with threshold and extreme-value circuit as a building block of a perception model with feedback (the blocks comprise the features of the outside-world object, n receptors, the perception event P, the normalized perception event P', the normalized perception forms si, and the memory).

3.3. A philosophical point of contact between information psychology and telecommunication

That information is understood in cybernetics as information, and neither as matter nor as energy, is the content of a much-quoted sentence of Wiener (1948, p. 155). Cybernetic engineering accordingly speaks less of causal relations than of informational relations. Within a control loop, for example, information about the actual value is to be processed, with a set value taken into account, into control measures. Such a way of looking at things is frequently more convenient and heuristically far better than a causal analysis, which would, however, be equally possible: in fact, information often rests on an inversion of the causal connection, namely when the cause of a sign is inferred from the sign, which, as is well known, need not be possible unambiguously even though the law of causality holds strictly (Frank, 1962a). In truth it corresponds to inner experience that man constantly makes such inferences back to causes, whereas, conversely, the automaton is undoubtedly causally determined (contentious special cases in which a 'random number generator', controlled e.g. by a radioactive substance, is used need not be gone into here); it is, however, regarded, in analogy to human consciousness, 'as if' it were evaluating information. A particularly fruitful heuristic principle of cybernetic engineering thus stems from the domain of information psychology or, more generally, from every psychological school that is not radically behaviouristic. In this connection another principle may be pointed out which probably plays a role in every psychological school without exception: the principle of teleology. Philosophical analyses have demonstrated relations of this principle to the control loop (Klaus, 1961; Frank, 1962a). It is remarkable that the possibility opens up here of tracing the goal-directed behaviour which interests the psychologist back to the engineering principle of feedback control, or conversely of interpreting technical control systems teleologically, in analogy to psychology. It thereby also becomes evident, incidentally, that teleology need not presuppose supernatural forces.

S U M M A R Y

For psychology, as for telecommunication, informational relations are of greater importance than statements about matter and energy. Information psychology accordingly endeavours to formulate quantitative psychological concepts more precisely by applying the concepts of telecommunication. Such conceptual models are justified by the brevity with which they make it possible to communicate numerous experimental results.

Conceptual models can be illustrated by organograms (analogous to the block diagrams of telecommunication). Taking the experiments of Attneave, Noble and Arnoult as an example, it is shown how such illustrative models can contribute heuristically to the formulation of testable hypotheses.

Illustrative models can also be built with the methods of telecommunication. Technical models serve not only didactic purposes and (in complicated cases) the simulation of human behaviour which cannot be treated mathematically but is regarded as determined by the transmitted message; they also serve to relieve man through the automation of mental work.

This line of thought is illustrated by learning matrices for pattern-recognition processes. A model is described which shows invariance with respect to translations, affine transformations and shears, but which detects rotations. By a slight extension of this circuit some further psychologically interesting processes can be represented by models.


R E F E R E N C E S

ATTNEAVE, F., (1959); Applications of Information Theory to Psychology (A summary of basic concepts, methods and results). New York, Henry Holt and Company.
BENSE, M., (1956/60); Aesthetica, II-IV. Baden-Baden, AGIS-Verlag.
BOK, S. T., (1961); Beobachtungen zentralnervöser Strukturen im Hinblick auf Gedächtnisfunktionen. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 2, 97-110.
FRANK, H., (1959a); Grundlagenprobleme der Informationsästhetik und erste Anwendung auf die Mime Pure. Stuttgart, Dissertation Technische Hochschule (Buchhandel: Hess, Waiblingen).
FRANK, H., (1959b); Théorie informationelle de la réalisation et perception dans l'art du mime. Cahiers d'Études de Radio-Télévision, 24, 377-387.
FRANK, H., (1961a); Zum Problem des vorbewussten Gedächtnisses. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 2, 17-24.
FRANK, H., (1961b); Die Lernmatrix als Modell für Informationspsychologie und Semantik. Lernende Automaten. H. Billing, Herausgeber. München, R. Oldenbourg (S. 101-108).
FRANK, H., (1962a); Kausalität und Information als Problemkomplex einer Philosophie der Kybernetik. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 3, 25-32.
FRANK, H., (1962b); Kybernetische Grundlagen der Pädagogik (Information et pédagogie). Eine Einführung in die Informationspsychologie und ihre philosophischen, mathematischen und physiologischen Voraussetzungen. Baden-Baden, AGIS-Verlag; Paris, Gauthier-Villars.
FRANK, H., (1962c); Kybernetik - Wesen und Wertung. Kybernetik und Organisationsmethodik. H. Mader, Herausgeber. Vorträge des 1. Quickborner Symposions 1962. Quickborn bei Hamburg, Verlag Schnelle (im Druck).
GOUBEAU, I., (1960); Ein informationsästhetischer Ansatz zur Deutung der griechischen Musikgeschichte. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 1, 129-136.
GÜNTHER, G., (1957); Das Bewusstsein der Maschinen. Eine Metaphysik der Kybernetik. Krefeld, AGIS-Verlag.
HOFSTÄTTER, P. R., (1957); Psychologie. Frankfurt, Fischer-Lexikon No. 6.
HYMAN, R., (1953); Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45, 188-196.
KLAUS, G., (1961); Kybernetik in philosophischer Sicht. Berlin, Dietz Verlag.
LUCE, R. D., Editor, (1960); Development in Mathematical Psychology. Illinois, The Free Press of Glencoe.
MERKEL, J., (1885); Die zeitlichen Verhältnisse der Willenstätigkeit. Philosophische Studien, 2, 73-127.
MILLER, G. A., BRUNER, J. S., AND POSTMAN, L., (1954); Familiarity of letter sequences and tachistoscopic identification. Journal of Genetic Psychology, 50, 129-139.
MOLES, A. A., (1958); Théorie de l'Information et Perception Esthétique. Paris, Flammarion.
NEWELL, A., AND SIMON, H., (1961); Computer simulation of human thinking. Science, 134, 2011-2017.
ROHRACHER, H., (1960); Einführung in die Psychologie. 7. Aufl. Wien, Urban und Schwarzenberg.
ROHRACHER, H., (1961); Regelprozesse im psychischen Geschehen. Sitzungsberichte der Österreichischen Akademie der Wissenschaften, 23b, No. 4, 3-21.
SCHOPENHAUER, A., (1847); Über die vierfache Wurzel des Satzes vom zureichenden Grunde. Dissertation, 1813. 2. Aufl. Frankfurt, Hermann'sche Buchhandlung.
SCHWARTZKOPFF, J., (1962); Über paradoxe Beobachtungen an akustischen Nervenzellen mit digitaler Zeiteinteilung. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 3, 97-109 (im Druck).
STEINBUCH, K., (1961a); Die Lernmatrix. Kybernetik, 1, 36-45.
STEINBUCH, K., (1961b); Automat und Mensch. Über menschliche und maschinelle Intelligenz. Berlin, Springer.
STEINBUCH, K., (1962); Bewusstsein und Kybernetik. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 3, 1-12.
STEINBUCH, K., UND FRANK, H., (1961); Nichtdigitale Lernmatrizen als Perzeptoren. Kybernetik, 1, 117-124.
VON CUBE, F., (1960); Zur Theorie des mechanischen Lernens. Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 1, 143-144.
WIENER, N., (1948); Cybernetics or control and communication in the animal and the machine. Actualités scientifiques et industrielles. Paris, Hermann et Cie.


Classes and Properties of Learning Matrices

P. M U L L E R

Institut für Nachrichtenverarbeitung und Nachrichtenübertragung der Technischen Hochschule Karlsruhe, Karlsruhe (Deutschland)

C O N T E N T S

1. Introduction
2. Principle of the learning matrix
3. Basic concepts of the learning matrix
   3.1. Set of properties - 3.2. Conditioned connection, excitation - 3.3. Extreme-value determination - 3.4. Contradictory double columns
4. Classes of learning matrices
   4.1. Learning matrix for binary signals - 4.2. Learning matrix for non-binary signals
5. Properties of learning matrices
6. Realization of learning matrices
7. Application of learning matrices
References

1. I N T R O D U C T I O N

The essential technical achievement of the nineteenth century consisted in the creation of machines which take mechanical work off man's hands. Examples are steam engines, electric motors and internal-combustion engines.

A characteristic technical tendency of the twentieth century is to invent machines which relieve man of the work of his senses and nerves. One may recall the development of modern program-controlled computers, or of automata which translate languages, read written characters or can understand spoken speech. Here it turns out that technical systems are sometimes superior to organic systems, e.g. with regard to speed or reliability, but sometimes also inferior as far as the ability to learn or the capacity to form invariants is concerned.

In organic systems neurons function as circuit elements, whereas in inorganic systems the circuits are built up from semiconductors, ferromagnetic components such as ring cores or transfluxors, etc. The circuit principles, too, show essential differences. In inorganic systems a clear separation of functions is usually possible, i.e. a distinction between the circuit parts serving the storage of information and those serving the execution of logical operations. In organic systems one knows 'circuits' for the formation of conditioned reflexes in which the logical operation is a function of the input information, where, in other words, the logical operation and the storage are performed in the same part of the circuit.

Models which represent simple modes of behaviour of organic systems use circuits imitating conditioned reflexes. In order to represent more extensive learning processes, however, such models require an amount of circuitry that exceeds the limits of what can be realized technically.

In what follows a circuit principle is described, the 'learning matrix' proposed by Steinbuch (1961a), which permits a model representation of certain learning processes* and can, moreover, be realized with simple technical means. According to the kind of information to be processed, classes of learning matrices with typical functions can be distinguished. Finally, a survey is given of the properties of learning matrices and of the possibilities of realizing them.

2. P R I N C I P L E  O F  T H E  L E A R N I N G  M A T R I X

The learning matrix (Steinbuch, 1961a) is a matrix-shaped circuit structure in which a large number of 'conditioned connections' (see below) are combined in such a way that some learning processes can be represented. The principle of a learning matrix will be explained in more detail with reference to Fig. 1.

Fig. 1. The principle of the learning matrix (Steinbuch, 1961b). Learning phase: when a set of properties [e] and a meaning b are entered together, 'conditioned connections' form. Can phase: after input of [e] the 'most similar' meaning b results; after input of b the learned set [e] results.

Characteristic are two bundles of crossing lines, which form the n columns and the m rows of the matrix circuit. At the crossing points of rows and columns connecting elements are arranged, in which 'conditioned connections' form; their properties are described in more detail in Section 3.2. Here let it first be noted that the formation of a 'conditioned connection' corresponds roughly to the synaptic processes which lead to the formation of 'conditioned reflexes'. To the columns of the matrix circuit a set of properties {e}, consisting of n properties e1, e2, . . . en (cf. Section 3), is applied.

* Zemanek, 1961; Piske, 1962.

Two modes of operation of the learning matrix can be distinguished, namely the learning phase and the can phase (Kannphase). During the learning phase, the assignment of a meaning bi to a set of properties {e}i applied to the columns is established by entering the meaning to be assigned, the meaning being represented by feeding a suitable signal to the i-th row simultaneously with the entered set of properties. On entry of the set of properties {e}i and of the row signal representing the meaning bi, 'conditioned connections' form through automatic setting of the connecting elements at the crossing points of the i-th row and all the columns. This process can be repeated for different sets of properties, with other rows activated, until the connecting elements of all rows have been set. How the setting of the connecting elements takes place in detail will be described more thoroughly in Section 3.2. For the moment let it be noted that after the conclusion of the learning phase the sets of properties have been assigned to the meanings, this assignment consisting in the fact that the sets of properties have been 'stored', or learned, in particular rows.

In the knowing phase, when a previously learned set of properties is fed into the columns of the matrix circuit, the learning matrix is able to indicate the learned meaning by an electrical signal (by virtue of a maximum-detection circuit). If the set of properties offered in the knowing phase does not belong to the repertoire of learned sets of properties, then - and this is an essential characteristic of the learning matrix - the meaning of the learned set of properties most similar to the offered one is indicated. This case (presentation of a set of properties not belonging to the repertoire of learned sets) can easily arise when the set of properties to be offered was indeed learned in the preceding learning phase but is present in a disturbed form at the column inputs of the learning matrix, i.e. deviates from the original set of properties in one or more properties. This mode of operation, in which the meaning of an offered set of properties is given, is called the {e} → b mode.

The reverse mode of operation, the b → {e} mode, is also possible in the knowing phase. Here an electrical signal representing a particular meaning b is fed to a row, whereupon the exact learned set of properties can be taken from the columns of the learning matrix.
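To make the two phases and the two modes of operation concrete, the following minimal sketch may help (it is not part of the original paper; the class and method names are invented for illustration). It stores each learned set of properties in the row assigned to its meaning and, in the knowing phase, either indicates the row whose stored set agrees best with the offered one ({e} → b) or returns the exact learned set for a given meaning (b → {e}).

    class LearningMatrix:
        """Toy learning matrix: one stored set of properties per row (= per meaning)."""

        def __init__(self, n_properties, n_meanings):
            self.n = n_properties
            self.rows = [None] * n_meanings   # row i holds the set learned for meaning i

        def learn(self, meaning_index, properties):
            """Learning phase: entering {e} together with a row signal sets that row."""
            assert len(properties) == self.n
            self.rows[meaning_index] = list(properties)

        def recall_meaning(self, properties):
            """Knowing phase, {e} -> b mode: meaning of the most similar learned set."""
            def excitation(stored):
                if stored is None:
                    return -1
                return sum(1 for s, p in zip(stored, properties) if s == p)
            return max(range(len(self.rows)), key=lambda i: excitation(self.rows[i]))

        def recall_properties(self, meaning_index):
            """Knowing phase, b -> {e} mode: the exact learned set of properties."""
            return self.rows[meaning_index]

    lm = LearningMatrix(n_properties=6, n_meanings=2)
    lm.learn(0, [0, 0, 0, 0, 0, 1])
    lm.learn(1, [0, 1, 0, 1, 1, 1])
    print(lm.recall_meaning([0, 1, 0, 1, 1, 0]))   # 1: most similar row despite one disturbed property
    print(lm.recall_properties(1))                 # [0, 1, 0, 1, 1, 1]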

3. BASIC CONCEPTS OF THE LEARNING MATRIX

In the following, some basic concepts are discussed more precisely, with the help of which the mode of operation of the learning matrix can be explained. These basic concepts also permit a simple classification of learning matrices.

3.1. Set of properties

A set of properties {e}_j is a set of n initially arbitrary properties e_j1, e_j2, ... e_jn. There are two possibilities for representing the properties. The properties can be expressed by binary digits, for instance in such a way that the binary digit '1' corresponds to the presence of a property (e_jν = 1) and the binary digit '0' to its absence (e_jν = 0). A set of properties consisting of binary-represented properties thus becomes a binary code word; for example, a set of properties may read {e}_j = 101...1...0. A vector representation of the sets of properties is obtained if the properties are interpreted as components of a property vector in an n-dimensional Cartesian vector space. For binary properties this yields an n-dimensional cube whose corner points form the possible sets of properties representable with n properties. More sets of properties than corner points are not possible. Learning matrices that process such binary sets of properties are called learning matrices for binary signals.

The properties may, however, also take non-binary values, for instance the physical characteristics of an object described by that very set of properties. Likewise, the sampled values of a time function or the state variables of a situation in the external world may be continuous property values of a now non-binary set of properties. The continuous property values, too, can be regarded as components of a property vector, which in the non-binary case may occupy arbitrary positions in the vector space and is not subject to the restrictions that hold in the binary case. Learning matrices capable of processing such sets of properties are called learning matrices for non-binary signals.

Strictly speaking, a distinction should have been made between quantized and continuous property values; however, technical limits are set to the distinguishability of infinitely small gradations of the properties. Likewise, quantization of the property values into more than two steps can be reduced to the binary representation again by a simple binary coding of the steps.

3.2. Conditional connection, excitation

At the crossing points of the rows and columns of the matrix circuit, connecting elements are arranged with whose help 'conditional connections' can be formed. A 'conditional connection' is a function in which the logical connection changes in dependence on the previous history. The conditional connection is formed during the learning phase by a corresponding adjustment of the connecting elements. Consequently, a connecting element must possess a (reversibly or irreversibly) adjustable physical quantity with a connecting* property. The magnitude of this physical quantity of the connecting element at the crossing point of the i-th row and the ν-th column is denoted by x_iν. It is required that the connection is not changed when the supply of energy is interrupted (the device is switched off).

* Connecting rows and columns.

The connecting elements are adjustable; consequently they can be used as storage elements for the properties of a learned set of properties. It will be shown below in what way the connecting elements are set during the learning phase.

In the knowing phase ({e} → b mode), the properties of an offered set of properties are fed into the columns of the learning matrix. Through the connecting action of the connecting elements, signals of different magnitude arise in the rows of the learning matrix; the rows are said to be 'excited'. Each connecting element at the crossing point of the ν-th column and the i-th row contributes an excitation ΔΘ_iν to the total excitation Θ_i of the i-th row.

The excitation contribution of the connecting element depends both on the set connection x_iν and on the property e_jν of the entered set of properties {e}_j. For the excitation contribution one can therefore write:

$\Delta\Theta_{i\nu} = F\{e_{j\nu};\ x_{i\nu}\}; \qquad i = 1, 2, \ldots, m$

If one further takes into account that the connecting elements of the i-th row are set during the learning phase in dependence on the properties of the set of properties to be learned, i.e. that x_iν is a function of e_iν, then

$\Delta\Theta_{i\nu} = F\{e_{j\nu};\ x_{i\nu}(e_{i\nu})\}; \qquad i = 1, 2, \ldots, m$

This is a relation with two 'degrees of freedom': on the one hand the functional relationship, formed in the learning phase, between the connection x_iν and the learned property e_iν, and on the other hand the relationship, underlying the knowing phase, between the property e_jν offered in the knowing phase and the connection x_iν; both relationships may at first be arbitrary. Only through specially fixed functional relationships between x_iν and e_iν on the one hand, and between x_iν and e_jν on the other, do typical classes of learning matrices arise. It should be noted that this functional relationship need not be established in a single learning step. At least for the binary learning matrix (see Section 4.1) the case of repeated learning of possibly disturbed patterns is of particular interest. These relationships depend primarily on the kind of connecting elements used. The total excitation of the i-th row is the sum of the excitation contributions of the individual connecting elements:

$\Theta_i = \sum_{\nu=1}^{n} \Delta\Theta_{i\nu}; \qquad i = 1, 2, \ldots, m$

This summation law is founded in the matrix structure, namely in the fact that all connecting elements of a row are connected with one another. The summation law holds only if all properties of a set of properties are entered into the columns of the matrix circuit simultaneously.
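The formal structure just described can be sketched in a few lines (an illustration added here, not taken from the paper; the function names are invented). The learning rule x_iν(e_iν) and the recall relationship F are passed in as two interchangeable functions (the two 'degrees of freedom'), and the total excitation of a row is simply the sum of the individual contributions, provided all properties are applied at once.

    def row_excitation(offered, learned, learning_rule, contribution):
        """Theta_i = sum over columns of F(e_jv, x_iv(e_iv)).

        offered       -- properties e_jv presented in the knowing phase
        learned       -- properties e_iv stored for row i during the learning phase
        learning_rule -- e_iv -> connection x_iv         (first degree of freedom)
        contribution  -- (e_jv, x_iv) -> Delta Theta_iv  (second degree of freedom)
        """
        return sum(contribution(e_j, learning_rule(e_i))
                   for e_j, e_i in zip(offered, learned))

    # The specific choice treated in Section 4: x_iv = k * e_iv and Delta Theta_iv = e_jv * x_iv.
    k = 1.0
    theta = row_excitation([0.2, 0.8, 0.5], [0.1, 0.9, 0.4],
                           learning_rule=lambda e: k * e,
                           contribution=lambda e_j, x: e_j * x)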

3.3. Maximum detection

All rows of a learning matrix are connected to a maximum-detection circuit. This circuit operates only in the knowing phase, and there only in the {e} → b mode. Its task is to recognize the row of maximum excitation and to indicate it by a signal.

Technical realizations of maximum-detection circuits are reported in Section 6.

3.4. Contradictory double columns

For each property of a set of properties, whether it is binary-coded or a continuous quantity, a pair of columns is provided (apart from special methods of technical realization). It is characteristic here that only one of the two connecting elements at the crossing points of a column pair with a row is ever set. From this the designation of these column pairs as 'contradictory double columns' is derived. The cylindrical connection (Frank, 1961) forms an exception.

Binary properties are entered into the contradictory double columns in such a way that for a present property (e_jν = 1) an electrical signal is fed, say, to the left column of the pair, and for an absent property (e_jν = 0) to the right column. This arrangement is necessary in order to be able to evaluate also the information contained in the statement that a property is not present, i.e. to prevent, for instance, the set of properties {e} = 1111...11 from covering all other possible sets of properties.

Also when non-binary signals are used as properties of a set of properties, the use of double columns is useful for representing negative property values: a signal representing a positive property value is fed, say, to the left column, and signals representing negative property values to the right column of the pair.

Fig. 2. Learning matrix. [Figure: matrix circuit with the set of properties {e} applied to the columns; one contradictory double column is marked.]

Fig. 2 summarizes the basic concepts explained above.

4. CLASSES OF LEARNING MATRICES

By disposing of the two degrees of freedom of the relation given in Section 3.2,

$\Delta\Theta_{i\nu} = F\{e_{j\nu};\ x_{i\nu}(e_{i\nu})\}; \qquad i = 1, 2, \ldots, m$

types of learning matrices with characteristic behaviour are obtained. Fundamentally, two classes of learning matrices can be given, determined by the mode of representation of the properties: the distinction between learning matrices for binary signals and those for non-binary signals, to which attention was already drawn in Section 3.1.

The learning matrices investigated so far (Steinbuch, 1961a; Steinbuch and Frank, 1961) are obtained by letting the automatic setting of the connecting elements of a row in the learning phase proceed in such a way that the connections x_iν are set proportional to the signals which, fed into the columns of the matrix circuit, embody the set of properties to be learned. Thus:

$x_{i\nu} = k \cdot e_{i\nu}$

It is the task of a technical realization of the learning matrix to implement this relation.

Furthermore, for the knowing phase a multiplicative relationship between the offered property e_jν and the connection x_iν is chosen, for instance in the form:

$\Delta\Theta_{i\nu} = e_{j\nu} \cdot x_{i\nu}(e_{i\nu})$

The connecting elements realized technically so far follow this relation.

4.1. Learning matrix for binary signals

The mode of operation of the learning matrix for binary signals (also called the 'binary learning matrix') will be described with the aid of a meaning matrix operated in the knowing phase, shown in Fig. 3. A meaning matrix is a matrix circuit that is not equipped for carrying out a learning phase, i.e. in which the connecting elements are set, for example, by hand, and which permits operation in the knowing phase only. In the example of Fig. 3, a repertoire of four binary sets of properties {e}_1 ... {e}_4 was set into the four rows of the matrix circuit by corresponding adjustment of the connecting elements of these rows. The learned sets of properties can be described by code words: {e}_1 = 000001, {e}_2 = 010111, {e}_3 = 101011, {e}_4 = 111101.

The connecting elements are formed by conductances, the connection corresponding to the magnitude G of the conductance. The connecting elements were set according to the relation x_iν = k·e_iν for the left and x'_iν = k·ē_iν for the right column of each pair (ē_iν being the complement of e_iν), with k = G, the properties now being binary quantities. The connections x_iν and x'_iν of the connecting elements at the crossing point of the i-th row with the ν-th double column assume the value G for e_iν = 1 and ē_iν = 1 respectively; otherwise they are equal to zero. Section 6 indicates how this kind of setting of the connecting elements can be realized technically.

Fig. 3. Principle of the binary meaning matrix (Steinbuch, 1961a). [Figure: a four-row meaning matrix with contradictory double columns; for the offered set of properties 010111 the row currents are I_1 = 3·U·G, I_2 = 6·U·G, I_3 = 2·U·G, I_4 = 3·U·G.]

The offered set of properties {e}_j - describable in Fig. 3 by 010111 - is represented by a direct voltage U, which is applied to the left column of a pair whenever the corresponding property of the offered set has the binary value 1, and is connected to the right column otherwise. The connecting action of the conductances at the crossing points of rows and columns consists in the fact that a voltage applied to the rows and columns causes defined currents to flow in them, and these currents represent the row excitation. Here the multiplicative relationship between the connection and the offered property again holds, in the sense that the excitation contribution ΔΘ_iν is the product of the applied voltage U and the connecting conductance G:

$\Delta\Theta_{i\nu} = G \cdot U$

In the present example the excitation is a current. The maximum current will flow in that row which has the largest number of connecting elements set to the connection G that correspond to the offered set of properties. The total excitation reaches an absolute maximum for the row whose set of properties, stored by the arrangement of the conductances, is identical with the offered set.

Fig. 3 gives the magnitudes of the currents that actually flow; row 2 is maximally excited, since the set of properties learned in the second row agrees with the offered set of properties in all properties. Technical realizations of maximum-detection circuits are given in Section 6.
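The worked example of Fig. 3 can be checked with a few lines of code (an illustrative sketch, not from the paper; U and G are taken as unit quantities). A row contributes G·U for every contradictory double column in which the offered and the learned code word agree, which reproduces the currents I_1 = 3·U·G, I_2 = 6·U·G, I_3 = 2·U·G, I_4 = 3·U·G.

    U = G = 1.0   # applied voltage and set conductance, in arbitrary units

    learned = {1: "000001", 2: "010111", 3: "101011", 4: "111101"}   # rows of Fig. 3
    offered = "010111"

    def row_current(code_word):
        """Sum of G*U over the double columns in which learned and offered properties agree."""
        return sum(G * U for e_i, e_j in zip(code_word, offered) if e_i == e_j)

    for i, word in learned.items():
        print(f"I_{i} = {row_current(word):.0f} * U * G")
    # Row 2 carries the maximum current: its learned set is identical with the offered one.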

For the binary learning matrices described so far it was assumed that the conditional connections are formed in a single learning step, i.e. that the connection immediately assumes the value given by x_iν = k·e_iν, where k is a constant.

This learning process completed in one step is contrasted with the stepwise learning process, which is characterized by the connections reaching the value given by x_iν = k·e_iν successively, in consecutive learning steps, depending on the extent to which the initially offered set of properties is confirmed, i.e. does not change, on repeated entry during the learning phase. Here statistical regularities can be taken into account during the learning process in the form of a weight factor assigned to each connection. Likewise it becomes possible to 'relearn' already learned sets of properties in such a way that the weight factors evaluating the connections are changed in accordance with sets of properties that change in a few properties. Formally, this situation can be expressed by letting the factor k in x_iν = k·e_iν represent the weight factor.
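A possible reading of this stepwise learning, sketched under assumptions of my own (the paper gives no explicit update rule): each connection moves a fraction of the way towards k·e_iν on every presentation, the fraction playing the part of the weight factor, so that confirmed properties are strengthened and changed properties are gradually 'relearned'.

    def stepwise_learn(connections, presented, k=1.0, weight=0.25):
        """One learning step: move every connection x_iv a fraction 'weight' towards k * e_iv."""
        return [x + weight * (k * e - x) for x, e in zip(connections, presented)]

    x = [0.0, 0.0, 0.0]
    for _ in range(8):                      # repeated, confirming presentation of the same pattern
        x = stepwise_learn(x, [1, 0, 1])
    print([round(v, 2) for v in x])         # approaches [1.0, 0.0, 1.0]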

4.2. Learning matrix for non-binary signals

The learning matrix for non-binary signals, referred to for short as the 'non-binary learning matrix', is able to process sets of properties whose properties take continuous values. With regard to the knowing phase, the difference from the binary learning matrix treated above is that the connections do not take equal values (leaving aside for the moment the case of stepwise learning in the binary case) but are proportional to the property values to be learned. In the binary learning matrix (when stepwise learning is taken into account) the connection is thus a measure of a frequency, whereas in the non-binary learning matrix it is a measure of a property. If one wished to introduce a statistical weight factor for the non-binary learning matrix as well, it would be necessary to set two mutually dependent quantities of a connecting element independently of one another.

Fig. 4. Principle of the non-binary meaning matrix (Steinbuch and Frank, 1961).

Operation in the knowing phase proceeds in the same way as for the binary learning matrix. Fig. 4 shows a non-binary meaning matrix which, for the sake of simple explanation, at first has no double columns. (This restricts the matrix circuit to positive property values.) The connecting elements are again represented by conductances, whose magnitudes are proportional to the property values of the sets of properties learned in the individual rows, the sets of properties being 'normalized' in a suitable way (see below). A set of properties is represented by a set of voltages proportional to its properties, which are applied to the columns of the arrangement. Determined by the magnitudes of the conductances and voltages, currents flow in the rows; these form the total excitations of the rows and can be calculated as the sum of the excitation contributions of the connecting elements of the i-th row.

The maximum current will flow in that row whose connecting elements (in the example of Fig. 4, whose conductances) represent a set of properties that is identical with, or most similar to, the offered set of properties. It is the task of the maximum detection already described to indicate this row of maximum current.

This statement is correct only under the condition that the learned sets of properties have been 'normalized'. It will now be shown in what way the normalization has to be carried out during the learning phase.

If a set of properties {e}_j is offered, and if the conductances g_iν have been set as connections according to x_iν = k·e_iν, the total excitation, i.e. the current I_i of the i-th row into which a set of properties {e}_i was learned, is

$I_i = \sum_{\nu=1}^{n} e_{j\nu} \cdot g_{i\nu}; \qquad i = 1, 2, \ldots, m$

This expression is equivalent to the scalar product of two vectors e_j and g_i with the components e_jν and g_iν, so that one may also write:

$I_i = |e_j| \cdot |g_i| \cdot \cos\varphi_{ij}; \qquad i = 1, 2, \ldots, m$

Since it was shown at the outset (in Section 3.1) that the sets of properties can be interpreted as vectors, and since the conductances g_iν can likewise be regarded as components of a vector of the same kind, φ_ij being the angle enclosed by these two vectors, it is natural to choose the angle φ_ij as the similarity criterion for the comparison between offered and learned sets of properties. In order that only the angle φ_ij enclosed by the two vectors appears as a variable in the above expression for the row excitation, it is necessary to require that the condition |g_i| = K be satisfied for all rows. From this the normalization condition for the sets of properties to be learned follows:

$\sqrt{\sum_{\nu=1}^{n} e_{i\nu}^{2}} = \mathrm{const.}$

In non-binary learning matrices, the angle between the vectors representing the offered and the learned sets of properties is thus used as the similarity criterion. When the enclosed angle becomes zero, the two sets of properties are identical. The maximum-detection circuit indicates the row whose learned set of properties encloses, in the vector representation, the smallest angle with the offered set of properties.

From the fact that the enclosed angle is used as the similarity criterion, a typical property of the non-binary learning matrix can be derived, namely the invariance of the recognition of sets of properties against affine transformations of the offered sets of properties. An affine transformation of a set of properties here means a multiplication of all properties of the set by a positive real number. The direction of the property vector is not affected by the affine transformation of the set of properties, and hence neither is the recognition, since it is based solely on the direction of the property vector.
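The angle criterion and its consequence can be illustrated as follows (an added sketch, not from the paper). With the learned rows normalized to equal vector length, the row current is proportional to cos φ_ij, and multiplying every offered property by the same positive number (an 'affine' transformation in the sense used here) leaves the ranking of the rows, and hence the recognition, unchanged.

    import math

    def normalize(vector, K=1.0):
        """Normalization condition: scale a learned property vector to the fixed length K."""
        length = math.sqrt(sum(v * v for v in vector))
        return [K * v / length for v in vector]

    def row_current(offered, learned_row):
        """I_i = sum_v e_jv * g_iv, i.e. |e_j| * |g_i| * cos(phi_ij) for normalized rows."""
        return sum(e * g for e, g in zip(offered, learned_row))

    rows    = [normalize(r) for r in ([1.0, 2.0, 0.5], [0.2, 0.1, 3.0])]
    offered = [2.0, 4.0, 1.0]
    scaled  = [10.0 * v for v in offered]    # same property set, multiplied by a positive number

    print([round(row_current(offered, r), 3) for r in rows])
    print([round(row_current(scaled,  r), 3) for r in rows])   # ten times larger, same maximum row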

Fig. 5. Possible formations of invariants (Steinbuch and Frank, 1961). [Figure: a curve and its transformations; panel labels include 'shear' and 'combined transformation'.]

Fig. 5 shows some transformations of a curve, for instance of a time function. If, instead of the properties of the original function, a set of properties is offered which corresponds to the difference quotients of the function, invariance against translation results. If the second difference quotients are used, invariance against shear is obtained.
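As a sketch of the preprocessing suggested here (illustrative only; the sample values are invented): offering first difference quotients instead of the sampled values removes an additive constant (translation), and offering second difference quotients removes, in addition, a linearly rising component (shear).

    def differences(samples):
        """First difference quotients of a sampled time function (unit step width assumed)."""
        return [b - a for a, b in zip(samples, samples[1:])]

    samples    = [0.0, 1.0, 4.0, 9.0, 16.0]
    translated = [s + 5.0 for s in samples]                     # curve shifted by a constant
    sheared    = [s + 2.0 * i for i, s in enumerate(samples)]   # linearly rising component added

    print(differences(samples) == differences(translated))      # True: invariant against translation
    print(differences(differences(samples)) ==
          differences(differences(sheared)))                    # True: invariant against shear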

If negative property values also occur within a set of properties, an arrangement with double columns is used (see Section 3.4). Such a matrix circuit is shown in Fig. 6.

Fig. 6. Non-binary learning matrix for negative properties (Steinbuch and Frank, 1961).

5. PROPERTIES OF LEARNING MATRICES

Two fundamental functional possibilities of learning matrices were already mentioned in Section 2, namely the {e} → b mode and the b → {e} mode in the knowing phase (see Fig. 1). In the {e} → b mode, the entry of a set of properties is answered by an indication of the meaning of that learned set of properties which is most similar to it; the exact meaning is given when the offered set of properties belongs to the repertoire of learned sets. In the opposite mode, the b → {e} mode, a suitable signal fed to a row and representing a particular meaning causes the output of the set of properties that was learned in association with that very row.

Fig. 7 shows two further arrangements of two learning matrices, which are called learning-matrix dipoles. Either the rows of the two learning matrices can be coupled with one another (ebe coupling) or their columns (beb coupling).

In the ebe coupling, a set of properties {e}_α is transformed into a second set of properties {e}_β of the same meaning as {e}_α, the meaning being the connecting element. With this coupling one could, for example, imitate the process of language translation: the sets of properties {e}_α and {e}_β are words of the object languages α and β, while b is their common 'meaning'.

Fig. 7. Learning-matrix dipoles (Steinbuch, 1961b). [Figure: two learning matrices coupled either by their rows (ebe coupling) or by their columns (beb coupling).]

In the beb coupling, a meaning b_α is transformed into another meaning b_β, the common set of properties {e} being the connecting element. This kind of interconnection can serve as a model, for example, of interpersonal communication, in which agreed signals are transmitted that have an agreed meaning for sender and receiver.

Fig. 8. Layering of learning matrices (after Steinbuch, 1961b).

A third possibility of interconnecting two learning matrices is given by layering (Fig. 8), in which, through a suitable intermediate circuit, the meanings that can be taken sequentially from the first learning matrix serve as a set of properties for the second learning matrix. Layered learning matrices can serve as a model of recognition processes, for instance the recognition of words from individual letters: the first learning matrix is fed the distribution of blackening of a letter as a set of properties {e}_α, and the meaning b_α thereupon indicated designates this letter. Several letters form a set of properties {e}_β for the second learning matrix, which now designates the words b_β. This structure can be extended; it makes it possible to imitate the ability of organic systems to reconstruct the meaning of disturbed code words (incorrectly spelled words).
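A rough sketch of such a layered arrangement (an added illustration; the letter 'blackening patterns' and the word list are invented): a first matrix maps a small binary pattern to the most similar letter, an intermediate step strings several letters together, and a second matrix maps the letter sequence to the most similar stored word, so that a disturbed pattern can still be assigned to the intended word.

    def most_similar(offered, repertoire):
        """Knowing phase of either layer: the learned key whose stored pattern agrees best."""
        def agreement(stored):
            return sum(1 for s, o in zip(stored, offered) if s == o)
        return max(repertoire, key=lambda key: agreement(repertoire[key]))

    # First learning matrix: 2 x 2 'blackening patterns' -> letters (toy repertoire).
    letters = {"C": "1110", "A": "1111", "T": "1010", "R": "1101"}
    # Second learning matrix: letter sequences -> words.
    words = {"CAT": "CAT", "CAR": "CAR", "RAT": "RAT"}

    def read_word(patterns):
        recognized = "".join(most_similar(p, letters) for p in patterns)   # intermediate circuit
        return most_similar(recognized, words)

    print(read_word(["1110", "1111", "1010"]))   # undisturbed patterns -> "CAT"
    print(read_word(["1110", "1111", "1000"]))   # disturbed last pattern -> still "CAT"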

6. REALIZATION OF LEARNING MATRICES

In realizing learning matrices, three problem areas can be recognized: first, the creation of suitable connecting elements; second, the development of suitable methods for the automatic setting of the connecting elements during the learning phase; and third, the technical execution of the knowing phase, including the maximum detection.

For the connecting elements there is first the requirement of automatic adjustability, e.g. proportional to the properties of the set of properties to be learned, with a weight factor taken into account (see Section 4.1). In most cases the connection, once set, should be reversible and should be retained when the supply of energy is interrupted.

Of the known circuit elements, ferromagnetic elements such as ring cores or transfluxors, whose induction determines the connection, appear to be the most suitable connecting elements. In special cases capacitor arrangements (evaporated-film capacitors or metallized-paper capacitors) as well as electrochemical cells, whose conductances serve as the connection, can also be used (Hönerloh and Kraft, 1961).

In the learning phase, a change of induction is produced in the ferromagnetic elements which is retained even without a supply of energy. The change of induction is reversible. In electrochemical cells, changes of conductance are produced by ion reactions; these, however, are generally irreversible. In capacitor arrangements, a change of capacitance is brought about by burning out the capacitor coatings, likewise an irreversible process.

In the knowing phase, finally, the task is to obtain by a suitable method, as the row excitation, a signal that is proportional to the sum of the products of the connections of a row and the offered set of properties. This requirement is met by interrogating the state of the connecting elements (the connection) with the offered set of properties itself and taking as the excitation a signal that has the property of being proportional to the sum of the products of the connections of a row and the properties of the offered set. There is the additional requirement that the connecting elements be interrogated in such a way that the connection is not permanently affected.


Fig. 9 gives a survey of the individual possibilities of realization. In technically executed learning matrices, the maximum detection is carried out by electronic circuits using semiconductor diodes and transistors. Common to all maximum-detection circuits are a non-linear comparison element, in which the individual row excitations are compared with a reference quantity, and an indicator for displaying the maximally excited row.

Fig. 9. Possibilities of realizing the learning matrix (Hönerloh and Kraft, 1961). [Table comparing candidate connecting elements (ferromagnetics: ring core, foil core; evaporated-film capacitors: Ag-SiO2-Ag; metallized-paper capacitors; electrochemical cells: Ag-AgBr-Ag, Ta-Ag electrolyte) with respect to the process used in the learning phase (change of induction under current drive; burning out of the capacitor coatings, i.e. change of capacitance; Ag-ion migration and formation of Ag filaments giving a conductance increase, or dissolution of Ag and formation of Ta2O5 giving a conductance decrease), the reversibility of the conditional connections (reversible for the ferromagnetics, otherwise largely irreversible), the interrogation in the knowing phase (HF current pulses, HF voltage or current, bipolar pulse pairs, voltage or current pulses), and the quantity measured as excitation (harmonic voltage of a given phase, current amplitude, voltage amplitude).]

It may also be pointed out here that structures such as learning matrices can easily be built up from neuron models. The maximum detection can then be realized in such a way that the maximally excited row, each row being represented by a neuron, acts inhibitorily on all other rows.
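A crude sketch of this neuron-model reading of maximum detection (my own illustration, not a circuit from the paper): every row 'neuron' is repeatedly damped in proportion to the excitation of all the other rows, so that after a few steps only the maximally excited row remains clearly active.

    def winner_take_all(excitations, inhibition=0.2, steps=50):
        """Mutual inhibition among row 'neurons': each activity is reduced by the others' sum."""
        activity = list(excitations)
        for _ in range(steps):
            total = sum(activity)
            activity = [max(0.0, a - inhibition * (total - a)) for a in activity]
        return activity

    print(winner_take_all([3.0, 6.0, 2.0, 3.0]))   # only the second row stays clearly above zero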

7. APPLICATIONS OF LEARNING MATRICES

In conclusion, a survey of some possible applications of learning matrices is given.

In automatic character and speech recognition, the optical or acoustic features to be recognized are sometimes not completely known, so that no circuit structures can be specified in advance which carry out the recognition completely. A learning matrix, however, can adapt itself to the features and can accommodate changes of type face, perhaps even handwritten characters. Adaptation to individual characteristics of speakers is likewise possible with learning matrices.

In language translation, learning matrices can be used to advantage for adapting the internal vocabulary to particular texts.


For documentation technique, novel conditions arise from the fact that the maximum detection is based solely on the information to be retrieved or classified, so that no address system is required. Wide fields of application likewise open up for weather forecasting, medical diagnosis, traffic control, and automatic regulation and control (Görke et al., 1961).

Z U S A M M E N F A S S U N G

The learning matrix is a matrix-shaped circuit structure in which a large number of conditional connections are combined in such a way that simple learning processes can be brought about.

In general, a learned assignment of a set of properties (a pattern) and a meaning takes place in a learning matrix. Both are converted into electrical signals by push-button switches or in a similar way. This assignment comes about automatically during the learning phase of the learning matrix. After completion of this phase follows the knowing phase, i.e. the operating phase, in which the learned meaning of a set of properties (the pattern) previously entered into the columns of the matrix circuit is reproduced, likewise in the form of an electrical signal, by a maximum-detection device.

The basic concepts of learning matrices are discussed, in particular the set of properties (the pattern) offered to the matrix circuit, the connecting elements at the crossings of rows and columns, the excitation of the individual rows brought about by these elements when a pattern is fed to the matrix, and finally the detection and indication of the most strongly excited row. The various classes of learning matrices are then treated. Two main groups are distinguished, namely binary matrices and a special non-binary type, the perceptive type.

A description of the various functions of binary and perceptive learning matrices points to the possibility of approximately imitating individual properties of the nervous system and of the brain by means of learning matrices. In essence there are two modes of operation, which are the inverse of each other. Two learning matrices can be coupled either in parallel or in series. Parallel coupling yields a (subjective or objective) learning-matrix dipole, while series coupling produces a layered learning matrix, which can serve as a model of certain recognition processes in the brain. The perceptive learning matrix also forms certain invariants in the recognition of patterns represented in analogue form. A simple model of this type is described.

Finally, the basic principles and technical methods of constructing learning matrices are discussed.

S U M M A R Y

C L A S S E S A N D P R O P E R T I E S O F L E A R N I N G M A T R I C E S

Learning matrices are structures of matrix form which combine a large number of conditioned connections in such a way that relatively simple learning processes can be effected.

Generally, in a learning matrix a learned coordination of a set of features (a pattern) digitally or analogously represented by electrical signals, and a meaning being likewise an electrical signal, takes place. This coordination is automatically effected during the learning phase of the learning matrix. When the learning phase is completed, the learning matrix is operated in the knowing phase, during which the previously learned meaning of a particular set of features (the pattern) presented to the columns of the matrix circuit is indicated by means of a maximum detection device, also in the form of an electrical signal.

First, some basic concepts related to learning matrices are discussed, especially the set of features (the pattern) fed into the matrix device, the connecting elements arranged at the intersections of rows and columns, the excitations of the different rows caused by the connecting elements when a set of features is presented to the matrix, and finally the detection and indication of the maximum excited row are described. Then a classification of learning matrices based on these concepts is put forward, and two important classes, a binary and a special type of non-binary learning matrix, the perceptive learning matrix, are discussed.

A description of the various functions of binary and perceptive learning matrices shows the possibility of approximating some properties of the nervous system and the brain by means of learning matrices. There are two fundamental modes of operation, one being the inverse of the other. Two learning matrices can be connected either in parallel or in series, in the first case resulting in the (subjective or objective) learning matrix dipole, in the latter case leading to layers of learning matrices which could serve as an analogue to certain recognition processes taking place in the brain. Likewise the perceptive learning matrix forms some invariants when recognizing analogously represented patterns. A simple model of this type of learning matrix will be shown.

Finally, some principles and modes of technical construction of learning matrices are indicated.

REFERENCES

FRANK, H., (1961); Die Lernmatrix als Modell für Informationspsychologie und Semantik. Lernende Automaten. Beiheft zu Elektronische Rechenanlagen, 3, 101-108.

GÖRKE, W., KAZMIERCZAK, H., UND WAGNER, S. W., (1961); Anwendungen der Lernmatrix. Lernende Automaten. Beiheft zu Elektronische Rechenanlagen, 3, 84-100.

HÖNERLOH, H. J., UND KRAFT, H., (1961); Technische Verwirklichung der Lernmatrix. Lernende Automaten. Beiheft zu Elektronische Rechenanlagen, 3, 75-83.

PISKE, U., (1962); Lernende Automaten. Realisierungs- und Anwendungsmöglichkeiten lernender Automaten unter besonderer Berücksichtigung der Lernmatrizen nach K. Steinbuch. Vortrag Kybernetische Tagung der Akademie der Wissenschaften Berlin, 20.-23. März 1962.

STEINBUCH, K., (1961a); Die Lernmatrix. Kybernetik, 1, 36-45.

STEINBUCH, K., (1961b); Schaltungen mit der Lernmatrix. Lernende Automaten. Beiheft zu Elektronische Rechenanlagen, 3, 63-68.

STEINBUCH, K., UND FRANK, H., (1961); Nichtdigitale Lernmatrizen als Perzeptoren. Kybernetik, 1, 117-124.

ZEMANEK, H., (1961); Logische Beschreibung von Lernvorgängen. Lernende Automaten. Beiheft zu Elektronische Rechenanlagen, 3, 9-25.


Sensory Homeostasis

G E O R G E W. ZOPF, Jr.

Electrical Engineering Department, University of Illinois, Urbana, Ill. (U.S.A.)

We have known since our first flush of self-consciousness that the vision in the mind’s eye is at variance with the objective world of the Scientist. Most often we have treated this discrepancy as defect; it is only recently that we have begun to suspect that we perceive only that which Adelbert Ames called ‘the mind’s best guess as to what is out there’. Since Plato’s cave, St. Paul’s dark glass, and Bacon’s idols, we have seen our vision as defective. But perhaps what we have castigated and deplored as defection is no more than our new friend, selection. Perhaps it is not too soon to examine the mechanisms, and even the purpose, of that selection.

I think it likely that the very objectivity of science, particularly that of its attitudinal subdivision of cybernetics, is now primed to treat the subjectivity of perception. We have been able to see much of the world as mechanism, and even to construe ourselves as machines. Perhaps this qualifies us to begin the study of the selfish machine, whose action is as much referred to its own status as it is to the variety of the world it finds itself in. Perhaps we can now show that we cannot expect sensory objectivity on the part of complex organisms living and surviving in a complex environment. The consummate art of regulation implies selection - reduction of variety - and this is tantamount to a kind of wilful blindness, an inability to attend to anything impertinent to the task of keeping a set of private and essential variables within bounds.

The sensory eidolon of the living, surviving organism is not the scientist who describes an apple as a smooth, round, red fruit, but rather the child who claims that an apple is to eat. We do not so much 'perceive' as we 'perceive as'. The ontic elements of our world are not couched in terms of an exterior logic, but of an interior strategy; not in terms of truth, but of advantage.

It follows that we shall not understand the complexities of perception, nor its part in the integrated life of the organism, so long as we can see it only as a strict and tidy elaboration of homomorphs of exterior events. The environment of the brain includes the body, with its own private insistences; to deny these in the interests of a fair representation of the exterior world is to abandon the core of regulation. If we misperceive the exterior world, we are said to be mad; if we disregard the interior world, we die.

Not so very long ago it was thought that the task of a sensory system was not more than the projection of an encoded isomorph of stimulus onto the analyzing and integrating screen of the cortex. Since then we have demoted the cortex and promoted the sensory system, allowing it to have a larger measure of analysis, so that stimulus data arrive centrally in full assortment, their features translated into terms of magnitude, place, and time.

Even more recently, we have emphasized the information-reducing capabilities of the sensory systems, maintaining that they respond not to every particularity of each stimulus, but only to certain properties held in common by many stimuli, so that each stimulus event is not reported as simply truths of intensity at particular places and times, but as affirmations and denials of complex properties, sophisticated reports of configurations and sequences that approach, in their elaboration, proto-percepts rather than crude sensations.

Thus it appears that the sensory systems play at least passive roles in decision. Through evolutionary selection of connection and unit function, they have become filters for the significant, passing only those patternings of stimulus that have been found to be semantically weighty in a world with constraints. So long as the world maintains crannies of constancy, such a fixed selection of significance will suffice for the creatures suited to the crannies.

Some of the details of this sensory pre-processing have been studied for vision (Lettvin et al., 1959; Hubel and Wiesel, 1959), and machine technologists have attempted exploitation. In their hands the extraction of portentous invariances, or significant properties, has been identified solely with operations on objective features of the stimulus patterns. While this may suffice for the frog-like tasks we require of our machines, there is serious question whether this treatment is wholly appropriate to the organism whose world comes at him from all sides, including the inside.

It is now clear that the nervous system not only has the means of controlling the flux of sensory data, but does indeed exert that control. Moreover, there are indications that the control is highly selective, and not a mere augmentation or attenuation of flux in whole sensory pathways. But whatever the magnitude and quality of this retrograde control, theories of sensory processing and perception must take into account the presence of discriminatory feedback. Nor can it be dismissed as a laggard development; the insect (Autrum, 1958) and the mollusc (Young, 1962) also show it.

We have long had anatomical testimony to the existence of fibers running countercurrent to the afferent sensory pathways, running even to the junction of receptor and primary sensory neuron. Study has shown that few, if any, of the points of synapsis on the afferent paths escape the intrusion of efferent fibers. Despite continuing ambiguities of source, course, and termination, we can take the structural provenance of efferent sensory control as well established.

Recent years have witnessed extensive physiological corroboration of these retrograde paths. We can now assert with confidence the functional patency of efferent control. Since there have been several surveys of central control (Brazier, 1958; Livingston, 1958, 1959) I will give you no broad picture here, but shall emphasize those features that comport with perception as a dynamism, related and responsive not only to the objective peculiarities of the stimulus but also to the internal state variables linked to experience and intention, genetic or learned.


As anyone familiar with the current fads of neurophysiology might suspect, the diencephalic-mesencephalic reticular formation is strongly implicated in sensory control (Hernández-Peón, 1955, 1961; Lindsley, 1958). We know that the afferent sensory systems, in addition to their classical courses, also run collaterally through the reticular formation. Nor is this course necessarily mediate. It has been found (Galambos et al., 1961) that transection of the classical auditory path in cats did not abolish cortical response to auditory stimulation, and that the response is abolished after transection only by chemical agents known to act selectively on the reticular formation. It is to be noted that this extralemniscal path shows no marked increase in latency over that of the lemniscal path; there is no creeping of signal through multisynaptic swamps.

These findings are presumptive evidence of a dual corticopetal path, one classically lemniscal, the other, though reticular in course, appearing to show a modal specificity that does not comport with the diffuse projection of the reticular formation on the cortex.

If the afferent path is dual, we may, by an appeal to symmetry, also postulate duality for the efferent path. There is indeed some factual support for such a division. When the olivo-cochlear tract was stimulated in anesthetized cats with middle ear muscles cut or paralyzed it was found (Galambos, 1956) that auditory nerve activity was suppressed. Later attempts by the same author (Galambos, 1960) to show the same effect in unanesthetized cats with the middle ear muscles intact were unsuccessful, although other workers (Gershuni et al., 1960) claim the effect. It has been noted (Galambos, 1960; Galambos and Rupert, 1959) that the middle ear muscles in the unanesthetized cat can effect considerable attenuation of auditory nerve response, and that this action is sporadic and continuing, and does not relate simply to a ‘protective’ response to loud sounds.

Studies on the visual system (Fernández-Guardiola et al., 1961) show suggestive parallels. When pupillary mobility in cats is abolished by atropine, the characteristic 'habituation' of the photic response at the optic chiasm (Hernández-Peón et al., 1956) was reduced or abolished.

These auditory and visual studies suggest dual termination of efferent paths: an 'outer' termination at the effector junctions of the motor apparati inserted between stimulus and receptor, and an 'inner' termination at the receptor and synaptic junctions of the afferent neuronal system.

From an identification of dual termination, it is tempting to argue for a dual efferent path. Since the two sorts of termination differ markedly in what we may expect of their effect on sensory input, it is at least plausible to suppose that they are mediated by functionally distinct central mechanisms. The outer terminus is ideally suited for gross modulation of input in a given sensory system; the inner terminations, playing on all or most of the synapses of the afferent path, including the receptor junctions, appear suited for selectivity within each modality.

But we need not depend wholly on plausibility. It has been found (Desmedt and Mechelse, 1958) that stimulation of diencephalic loci distinct from the reticular complex could reduce the auditory nerve response. Similarly, there is evidence of an efferent extrareticular system affecting response in the afferent somesthetic system (Chambers et al., 1960). These results suggest that the reticular formation is not in complete charge of all sensory commerce, and indeed, plays a coarse role in efferent influence.

Hence we have support (Desmedt, 1960; Sprague et al., 1960) for the existence of an extrareticular efferent system showing high discriminability. The implication of the reticular formation in the phenomena of 'attention', 'habituation', and conditioning, as read within the nervous system (Hernández-Peón, 1955, 1960, 1961) is not gainsaid by the introduction of a new efferent system. If the analogy is permitted, the reticular system acts protopathically, and the 'paralemniscal' system acts epicritically, in their control of sensory input.

In sum, I am suggesting that there are two efferent systems playing upon the incoming sensory channels. The reticular efferent system, acting terminally in the motor apparati accessory to each sensory system, exerts a gross tonic and phasic control, modulating the total sensory flux and selecting among the various sensory systems, with a strictly limited capability of selection within a modality. The paralemniscal efferent system, on the other hand, acting primarily on receptor junctions and synapses of the afferent systems, is charged with discriminative intramodal feedback.

But feedback, coarse or fine, is a means, not an end. To what purpose are these efferent systems directed? I have used 'sensory homeostasis' as my title, but it would be arrogant to assume that the term is transparent in reference. To speak of homeostasis is to speak of the dynamic maintenance of variables within bounds, against disturbance. We know well the varieties of vegetative homeostasis, and the homeostatic mechanism related to tonus and posture, but we have paid little attention to analogous regulation on the sensory side.

I can suggest several varieties of homeostasis operative in perceptual processes. There is gross regulation of stimulus intensity, largely involving the accessory motor apparati, but undoubtedly also involving attenuation and amplification in the neural paths. Then there are the mechanisms by means of which the sensory networks involved in the business of property filtration are maintained in states permitting constant report. The extraction of stimulus invariants has been likened to the selection of a subset of all stimulus patterns. But the members of the subset are themselves numerous and various, and we should be lucky indeed to find a fixed mechanism reporting indiscriminately all members of the subset. It comports far better with our knowledge of biological ways and means to suppose that the mechanisms for the extraction of invariants are not themselves invariant; the organism may have to do something, in order that two distinguishable items may be seen as the same.

Yet another form of homeostasis has to do with habituation. There is evidence (Keidel et al., 1961) that the process of habituation is not so much a reduction of responsivity as it is a sensitization to deviation from the habituating stimulus. Hence there may be an overriding homeostatic mechanism insuring maximal discriminatory power wherever the repetitive or constant stimulus lies in the sensory spectrum. Thus the power of selection, in terms of the size of the set from which the organism is prepared to make a choice, is maintained constant whether he is alerted to a whole modality or to a fraction thereof. Undoubtedly a similar mechanism is operative in the general phenomenon of competition between specification of what there is, and when it is.

Homeostasis is certainly shown in the highest orders of perceptual constancy, in all those cases where the perception is preserved though the stimulus changes. Internal states, the facilitatory or anticipatory sets of the organism (Sperry, 1955), will tend to keep the perception integral and defined. Indeed, perception, rather than be considered as the elaboration of pictures of the environment, might better be considered as the development of definitions of the environment. And these definitions will be operational definitions, given in terms of the motor and glandular responses appropriate to them. The ultimate internal representations of external events are instructions to perform, patterns not of the objective relations among the events of the environment, but of the elemental or stereotypic motor and glandular unit processes which may combine and act to reduce the disturbance of essential variables.

As cyberneticists, we are obliged to see the brain as a regulator, to see its task as the countering of environmental variety by internal variety, which must ultimately be expressed by motor and glandular reactions upon the environment. Hence it is appropriate to this view that sensory representation be expressed in terms of motor and glandular coordinations (Sperry, 1952). And it is appropriate that central mechanisms should influence sensory processing in such a way that our view of the environment is at the mercy of what we are prepared to do about it.

Hence we come full circle. An apple is to eat, to grasp, to throw, and not a logical compound of properties devoid of compulsion. We do not see in order to know the world, but to exist in it. In a complex and dynamic world, to exist is to act, to do. We are actors and poets first, and scientists only later and at leisure.

A C K N O W L E D G E M E N T

Work supported in part by contract NSFG 17414 of the National Science Foundation.

S U M M A R Y

Efferent pathways to peripheral receptor organs have long had at least ambiguous testimony, and recent years have witnessed some exploration of the functional patency of such paths. Two classes may be distinguished: (1) innervation of the accessory motor apparati of the sensory systems, and (2) efferent fibres ending on or near the receptor cells themselves.

While the first class may be tentatively assigned the role of attenuating or normalizing an intensity dimension of the stimulus, no such simple assignment of a role can be made for the second class. Possible assignments of function are discussed, and an argument given for a role in the general phenomena of perceptual constancy. A connection between the function of these retrograde loops and the assumption of internal modeling of the world is developed.


It is argued that the internal model does not maintain a consistent homomorphism to stimulus configurations, nor is it entirely determined by experience.

The utility of efferent sensory control in exploiting stimulus constraints (input redundancy) is discussed.

R E F E R E N C E S

AUTRUM, H., (1958) ; Electrophysiological analysis of the visual systems of insects. Experimental Cell Research, Suppl. 5, 426-439.

BRAZIER, M. A. B., (1958); The control of input from sensory receptors by output from the brain. Proceedings of the First International Congress on Cybernetics. Paris. Gauthier-Villars (pp. 857-861).

CHAMBERS, W. W., LEVITT, M., CARRERAS, M., AND Liu, C. N., (1960); Central determination of sensory processes. Science, 132, 1489.

DESMEDT, J. E., (1 960) ; Neurophysiological mechanisms controlling acoustic input. Neural Mecha- nisms of the Auditory and Vestibular Systems. G. L. RASMUSSEN AND W. F. WINDLE, Editors. Springfield, Illinois, C. C. Thomas (pp. 152-164).

DESMEDT, J. E., AND MECHELSE, K., (1958); Suppression of acoustic input by thalamic stimulation. Proceedings of the Society for Experimental Biology and Medicine, New York, 99,172-775.

DESMEDT, J. E., AND MECHELSE, K., (1959); Corticofugal projections from temporal lobe in cat and their possible role in acoustic discrimination. Journal of Physiology, London, 147, 17P.

DESMEDT, J. E., AND MONACO, P., (1961); Mode of action of the efferent olivo-cochlear bundle on the inner ear. Nature, 192, 1263-1265.

FERNANDEZ-GUARDIOLA, A., ROLDAN, E., FANJUL, L., AND CASTELLS, C., (1961); Role of the pupillary mechanism in the process of habituation of the visual pathways. Journal of Electroencephalography and Clinical Neurophysiology, 13, 564516.

GALAMBOS, R., (1956); Suppression of auditory nerve activity by stimulation of efferent fibers to cochlea. Journal of Neurophysiology, 19, 424437.

GALAMBOS, R., (1960); Studies of the auditory system with implanted electrodes. Neural Mechanisms of the Auditory and Vestibular Systems. G. L. RASMUSSEN AND W. F. WINDLE, Editors. Springfield, Illinois, C. C. Thomas (pp. 137-151).

GALAMBOS, R., MYERS, R. E., AND SHEATZ, G. C., (1961); Extralemniscal activation of auditory cortex in cats. American Journal of Physiology, 200, 23-28.

GALAMBOS, R., AND RUPERT, A,, (1959); Action of the middle ear muscles in normal cats. Journal of the Acoustical Society of America, 31, 349-355.

GERSHUNI, G. V., (1959); Central regulation of discharges in the periferal neuron of the auditory system. Sechenov Journal of Physiology, USSR, 45, No. I , 24.

GERSHUNI, G. V., KOZHEVNIKOV, V. A., MARUSEVA, A. M., AVAKYAN, R. V., RADIONOVA, E. A., ALTMAN, J. A., AND SOROKO, V. I., (1960); Modifications in electrical responses of the auditory system in different states of higher nervous activity. Journal of Electroencephalography and Clinical Neurophysiology, Suppl. 13, 1 1 5-124.

HERNANDEZ-PE~N, R., (1 955) ; Central mechanisms controlling conduction along central sensory pathways. Acta neurologica latino-america, 1, 256-264.

HERNANDEZ-PE~N, R., (1 961) ; Reticular mechanisms of sensory control. Sensory Communication. W. A. ROSENBLITH, Editor. New York, M.I.T. Press and John Wiley and Sons (pp. 497-520).

HERNANDEZ-PE~N, R., GUZMAN-FLORES, C., ALCARAZ, M., AND FERNANDEZ-GUARDIOLA, A., (1 956) ; Photic potentials in the visual pathway during ‘attention’ and photic ‘habituation’. Proceedings of the Federation of American Societies for Experimental Biology, 15, 91-92.

HERNANDEZ-PEON, R., AND SCHERRER, H., (1955); ‘Habituation’ to acoustic stimuli in cochlear nucleus. Proceedings of the Federation of American Societies for Experimental Biology, 14, 71.

HUBEL, D. H., AND WIESEL, T. N., (1959); Receptive fields of single neurons in the cat’s striate cortex. Journal of Physiology, London, 148, 574.

KEIDEL, W. D., KEIDEL, U. O., AND WIGAND, M. E., (1961); Adaptation: loss or gain of sensory information? Sensory Communication. W. A. ROSENBLITH, Editor. New York, M.I.T. Press and John Wiley and Sons (pp. 319-338).

LETTVIN, J. Y., MATURANA, H. R., MCCULLOCH, W. S., AND PITTS, W. H., (1959); What the frog’s eye tells the frog’s brain. Proceedings of the Institute of Radio Engineers, 47, 1940-1951.


LINDSLEY, D. B., (1958); The reticular system and perceptual discrimination. Reticular Formation of the Brain. H. H. JASPER et al., Editors. Boston, Little, Brown, and Co. (pp. 513-534).

LIVINGSTON, R. B., (1958); Central control of afferent activity. Reticular Formation of the Brain. H. H. JASPER et al., Editors. Boston, Little, Brown, and Co. (pp. 177-185).

LIVINGSTON, R. B., (1959); Central control of receptors and sensory transmission systems. Handbook of Physiology, Sec. 1: Neurophysiology, Vol. I. H. W. MAGOUN, Sec. Editor. Baltimore, Williams and Wilkins (pp. 741-760).

SPERRY, R. W., (1952); Neurology and the mind-brain problem. American Scientist, 40, 291-312.

SPERRY, R. W., (1955); On the neural basis of the conditioned response. British Journal of Animal Behaviour, 3, 41.

SPRAGUE, J. M., STELLAR, E., AND CHAMBERS, W. W., (1960); Neurological basis of behavior in the cat. Science, 132, 1498.

YOUNG, J. Z., (1962); The retina of cephalopods and its degeneration after optic nerve section: the optic lobes of Octopus vulgaris. Philosophical Transactions of the Royal Society, London, B, 245, 1-58.

DISCUSSION

WELLS: How much evidence is there really for these efferent innervations of receptors?

ZOPF: Although there is good evidence of efferent innervation of sensory nuclei, the evidence of innervation of the receptors themselves is spottier. There is good anatomical evidence for the muscle spindles and for the hair cells of the cochlea. There is less reliable physiological evidence for the visual receptors. The point to be made is that there is efferent connection to the upstream portions of the sensory systems; whether this terminates on receptors in all modalities is beside the point. Efferent input to bipolar or ganglion cells in the retina can be as effective, though different in action, as input directly to rods and cones.

I think there need be no doubt that there are efferent fibers to all the sensory systems, and that they are functional.

MCCULLOCH: There is only one exception: the olfactory system. We have studied it quite extensively, but never found any evidence for efferent connection.

ZOPF: Hagbarth and Kerr, however, did report central influence on olfactory bulb activity.

DROOGLEVER FORTUYN: What is the evidence for the functional significance of efferent fibers to the sense organs? Are there changes in threshold, or in the function of the eye, or in the retinogram under the influence of central stimulation? I doubt whether such a system exists in the mammalian eye.

ZOPF: I cannot say anything about the functional patency of such a system in the mammalian eye. I would venture that the presence of efferent innervation will provide for modulation (and even determination) of sensory input, by affecting receptor sensitivity and alteration of coded signals as well as threshold, d.c. potentials, etc., of secondary and higher sensory neurons. Functionally, I would identify efferent systems with attention, adaptation, habituation, and even discrimination. As I suggested in my paper, efferent control may have a simple vegetative, homeostatic function: to maintain the operation of the peripheral neural sensory apparatus constant in function over the vagaries of some dimensions of the stimulus. It may do no more than 'band limit' impulse frequency, for example. Or it may have the more significant function of 'setting' the peripheral apparatus for particular anticipated stimulus configurations or features.
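The 'band-limiting' of impulse frequency mentioned here can be pictured with a minimal numerical sketch, assuming an efferent gain signal that keeps a receptor's firing rate inside a fixed band despite large swings in stimulus intensity; the update rule and all numbers are illustrative assumptions, not taken from the paper or the discussion.

```python
# Minimal sketch of the 'band-limiting' idea: an efferent gain signal nudges a
# receptor's firing rate back into a fixed band when the stimulus drives it out.
# The band, the gain step and the toy transducer are illustrative assumptions.

LOW, HIGH = 20.0, 80.0   # target band for firing rate (impulses/sec), assumed
GAIN_STEP = 0.1          # fractional gain correction per time step, assumed

def receptor_rate(stimulus, gain):
    """Toy transducer: firing rate proportional to stimulus times efferent gain."""
    return stimulus * gain

def efferent_update(rate, gain):
    """Nudge the efferent gain so that the rate drifts back into the band."""
    if rate > HIGH:
        gain *= (1.0 - GAIN_STEP)
    elif rate < LOW:
        gain *= (1.0 + GAIN_STEP)
    return gain

gain = 1.0
for stimulus in [30, 30, 300, 300, 300, 5, 5, 5]:   # widely varying stimulus
    rate = receptor_rate(stimulus, gain)
    gain = efferent_update(rate, gain)
    print(f"stimulus={stimulus:5.0f}  rate={rate:7.1f}  new gain={gain:.3f}")
```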

MCCULLOCH: In the case of the mouse and the rat the feedback from the brain to the eye is so fast in changing the receptivity of the eye that we have never been able to solve it. The mouse just sits there and you do get responses or you do not get responses from its eye according to whether it is interested or not.

ZOPF: In other words, mice can shut their minds as well as men.

GOLDACRE: Is it possible that the physical basis of 'paying attention' is the inhibition of these efferent pathways to the sensory organs that you are suggesting?

ZOPF: It has been suggested that the part the reticular formation plays in attention is just such a selective suppression of all sensory input save that which is attended to. But it may also give selective or diffuse activation of more central structures, such as sensory cortex.


Sensory Information and Conceptual Information in the Bio-Electrical Activity of the Brain

ERNEST HUANT*

* Present address: 9, Avenue Niel, Paris.

I. GENERAL REMARKS ON NEURO-SENSORY INFORMATION

A. Information and the differentiation of receptors

In the science of mechanisms, information often involves only a single mode of energy variation, and often a very small variation compared with the energy of execution, or even with the total energy, the so-called supply energy, of the machine: variations of luminous energy (electronic 'tortoises'), of corpuscular energy (electronic systems in general), of wave energy, and, taken globally, all the modalities of the control energy (which, as we have seen, is a notion identical with information) are in general much weaker than the energy of execution.

In biology, and quite particularly in neurophysiology, this enormous gap between energy values is a practically general rule, at least for the neuro-receptors as such, and all the more so the more differentiated they are. The 'quantity of information' appears, in biology, distinctly greater for an equal control energy than in mechanical systems, and even greater still with a far smaller control energy. The whole physiology of sensation, in particular, establishes this fact, with hardly any need to illustrate it by specific examples. It is enough to recall what the energy corresponding to the 'feedbacks' ascending from the proprioceptive senses to the cells of the cerebellum, through the ascending columns of the spinal cord, amounts to in comparison with the muscular energy released in locomotion or in any act of voluntary movement. One can thus appreciate the magnitude of this difference, and judge at the same time of its perfect correspondence with the very high differentiation of the neuronal connections, from the sensory periphery to the cord and from the cord to the higher centres.

It is the same in all neurophysiological domains. A very small difference of ionic concentration at a neuro-neuronal or neuro-muscular synaptic junction, or again in the humoral stream affecting an endocrine gland, will bring to the brain, to the biceps or to the adrenal gland the information that commands the very measurable energetic amplification which finally moves our arm or accelerates our heartbeat. We know, moreover, that reactions of displacement of equilibrium, governed by the laws of Le Chatelier and Van 't Hoff and by that of Guldberg and Waage on mass action, determine an inverse 'tendency' (that is, towards the re-establishment of equilibrium) which can, by itself alone, carry on the message of a biological information. Now, these tendencies bring into play only minute quantities of electro-ionic energy. It is so in all biological phenomena whenever reversible reactions are involved.

This character of biological information, extremely dense as regards its intrinsic value and extremely slender as regards its energy ratio, compared with the information of mechanisms, must appear as a direct function of the differentiation of the respective systems. Or rather, this character of biological information confirms very precisely what the cyberneticians had already observed from the standpoint of mechanisms alone: the quantity of information is a direct function of the degree of differentiation; and the neurophysiological point of view will allow us to add this: its transfer energy is an inverse function of that differentiation.

Bringing together the elements of this double characterization of biological information seems to lead to an eminently paradoxical result. Physically it even presents an appearance of contradiction, and the opposite, namely the quantity of information as a direct function of the quantity of transfer energy, would seem bound to impose itself.

This shows clearly that the notion of information must be more than a strictly energetic notion, and it is here that neurophysiology can return to cybernetics the service which the latter has rendered it. 'A message,' Wiener tells us, 'is a continuous or discontinuous sequence of measurable events distributed in time.' But the quantity of information corresponding to a message will depend on a notion that one could almost call 'qualitative', directly related to the degree of differentiation of a receptor, that is, to its possibilities of being affected by this or that part of the message. For there is no reason why a message should always be absolutely simple, and even when it is, it can imprint its information, by a uniform process, upon a polyvalent receptor. The said information will thus be quantitatively multiplied by the possibilities of execution (one would almost like to say, the sensitive surface) of that receptor. But there is more: in biology, the origin of the information is rarely uniform. It very often is so at a first phase of its relays along the way, for example in the direct impression of an external energy upon a sensory receptor (retina, eardrum, ...); but it then diffuses through genuine neuro-sensory transformers (the cells of Corti for hearing, the terminal sympathetic network for the glands, etc.). In each of these diffusers, the informing energy then takes on, upon new energetic modalities (sometimes weaker, sometimes stronger) which may be added to the primitive modality, further values arising from the diffuser itself. Thus new properties will in their turn enrich the quantity of information, even if the energetic transfer is further diminished.

Thus, in accordance with a correspondence we have already met many times in other studies, it does seem that in this notion of information the biological fact reveals peculiarities which derive from its very organization. And this, we think, is in particular because it raises to the level of large-scale systems certain latent properties of real nature which the physicist can only glimpse at the level of infra-atomic systems. It realizes at once, and demonstrates by multiple, constantly renewed realizations, a fact which the physico-mathematics of mechanisms already allowed one to foresee, but which biology illustrates with numerous examples and so reinforces with an effective authority: namely, that information can be more than an energy variation, and that the 'quantity of information' presupposes, in addition, a genuine transfer of properties.

B. Energy alternations, chronological lags and the transfer of properties

This notion of a transfer of conceptual properties is a priori rather, indeed very, obscure. We have attempted to analyse it in a study of the intellectual structures of reality, of which it seems useful to recall a few elements here. The analysis of mathematical physics leads us to decompose things into molecules and atoms, then into particles (electrons, photons) which are themselves no more than points of force, kinds of abstractions, the support of action. So that finally, and beyond the quantum of action whose shock, whose impact, constitutes the observable phenomenon to which all the others tend to be reduced, the world appears to express itself by an extremely complicated system of equations, by a conceptual ensemble with a mathematical structure. The question may then be asked how the passage can be effected from this ensemble to the identifiable and measurable modalities, and to the perceptions which constitute for us what we commonly call the real.

It has seemed possible to us to consider how action, envisaged as a global concept, can be identified throughout its modalities of transformation with the transport of mathematical properties linked to, but possibly also differing from, the purely undulatory constituents. We have shown how, from the conceptual point of view, this notion of distributions of properties according to a movement or an energy could take on a single meaning and finally be connected one to the other through the intermediary, in particular, of a notion of time. And how, in particular, the distribution of elementary properties according to a given energy level could take on a comprehensible conceptual meaning by widening the notion of energy into a probabilistic concept corresponding to the gathering of all possible states around a certain state. In any case, the idea of intellectual structures is inseparable from the global concept of action, which alone is capable of supporting the notion of movement, of change, and hence of temporality.

It may thus appear that action, independently of its intrinsic value and of every modality of transformation through specialized effectors such as the neuro-sensory receptors of living beings, can be the messenger of chronological lags which unite, with the value of velocities lower than or equal to that of light, the value of local times and even the velocity of propagation of those times.

But we must note here that, through the highly specialized effectors and receptors of the apparatus of the human neuro-psychism, the information inherent in action properly so called is enriched and developed by a genuine confrontation, a genuine relation, between the afferent action itself and the various modalities of transformation that it sets off through these apparatuses. Better than any other, the biological element of the cerebral centres is itself an operator, that is to say a modifier of events, and lends itself particularly well to receiving and emitting those sequences of alternatives which the mathematical theory of information asserts to be the best way of transporting a message. For example, it appears very probable that in the deepest part of its organization, at the finest level of its micro-histological and micro-chemical dissection, the living being is sensitive to the quantum discontinuity which mathematical physics shows to be the substrate of real nature. It is, as we have seen and shall see further on, equally sensitive to chronological lags of an extremely minute order.

Consider, for example, that the visual perception of the relief of objects depends essentially, in the last analysis, on the chronological lag between the photons emitted by the parts of the object nearest to and farthest from our retina, that is, a lag of the order of a few millimetres or centimetres divided by 3·10^9 or 3·10^10.
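For concreteness, a worked order-of-magnitude figure (the numerical example is ours, not in the original text), taking the depth difference as about 1 cm and the velocity of light as 3·10^10 cm/sec:

```latex
% Worked order-of-magnitude figure (ours, not in the original): the time lag
% corresponding to a depth difference of about 1 cm, with c = 3 x 10^10 cm/sec.
\[
  \Delta t \;=\; \frac{\Delta d}{c}
           \;\approx\; \frac{1\ \mathrm{cm}}{3 \times 10^{10}\ \mathrm{cm/sec}}
           \;\approx\; 3 \times 10^{-11}\ \mathrm{sec}.
\]
```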

These lags and this discontinuity certainly constitute the two essential means by which real nature launches its fundamental alternatives, which can all be summed up in this possibility: presence of a certain quantity of elements of action at a determined instant; absence of a certain quantity of elements of action in a given interval of local time.

It is therefore possible, following these general data, to sketch a general scheme of the temporo-spatial distribution of information at the level of the cerebral centres, by plotting on the ordinate the value ΔQ1 of the quantity of informative action exerted after a free chronological interval Δt1 with respect to the preceding one at the instant t, then a new value of the informative energy ΔQ2 exerted after the free interval Δt2 (added to the time Δt1), then ΔQ3 for the free interval Δt3 (added to the time t + Δt1 + Δt2), then ΔQ4 for the interval Δt4, and so on. One thus obtains, on the plane of the reference axes, not a curve but a system of isolated points I1, I2, I3, etc. (Fig. 1). It nevertheless seems legitimate to us to consider that the overall value of the information at the point In is a direct function of the ratio

\[ \frac{\Delta Q_n - \Delta Q_{n-1}}{\Delta t_n} \]

which amounts to saying that, if the points I1, I2, I3 are joined by straight lines, the successive overall values of the information can be represented by the trigonometric tangents of the angles which the segments I1I2, I2I3, etc. make with the time axis.

Fig. 1. Schematic diagram of the temporo-spatial distribution of information for sequences ΔQn/Δtn.

Such a graphical representation allows one to see at once that even for a small value of ΔQ the information can be very great if Δt is sufficiently small, and that even for a large value of ΔQ the information may be very mediocre if Δt is sufficiently large. An interesting limiting case, particular though theoretical, is that of a sequence of identical energy values separated by identical chronological intervals. The graphical representation proposed above then becomes a straight line I1 I2 I3 parallel to the time axis: there is in reality no true information apart from that of the first sequence ΔQ/Δt, which merely repeats itself identically, the consecutive informations being nil (Fig. 2). Moreover, one can see that even this first information ends by losing its informative character through its own repetition. The point I1 can be replaced in the chronological succession by the points I2, I3, ... In, that is to say the straight line OI tends towards a position parallel to the time axis, and hence towards a tangent value of zero.

Fig. 2. Particular case in which identical energy sequences are separated by identical chronological intervals: In → 0.
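The content of Figs. 1 and 2 can be restated in a minimal numerical sketch, under the reading adopted above that the information carried at the point In is the slope (ΔQn − ΔQn−1)/Δtn of the segment joining In−1 to In; the sample sequences below are invented for illustration.

```python
# Minimal numerical sketch of Figs. 1 and 2, under the reading adopted in the
# text: the information carried at point I_n is taken as the slope
# (dQ_n - dQ_(n-1)) / dt_n of the segment joining I_(n-1) to I_n.
# The sample sequences are invented for illustration.

def information_sequence(dQ, dt):
    """Slope of each segment I_(n-1) -> I_n with respect to the time axis."""
    info = []
    for n in range(1, len(dQ)):
        info.append((dQ[n] - dQ[n - 1]) / dt[n])
    return info

# Irregular sequence (Fig. 1): unequal energy steps, unequal free intervals.
print(information_sequence(dQ=[1.0, 3.0, 2.0, 6.0], dt=[0.0, 0.5, 2.0, 0.2]))
# -> [4.0, -0.5, 20.0]   non-zero values: each new arrival is informative

# Limiting case (Fig. 2): identical energies at identical intervals.
print(information_sequence(dQ=[2.0, 2.0, 2.0, 2.0], dt=[0.0, 1.0, 1.0, 1.0]))
# -> [0.0, 0.0, 0.0]     after the first arrival the message carries nothing new
```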

II. SENSORY INFORMATION AND THE ARREST REACTIONS OF THE BRAIN WAVES

What demonstrates still better the role of the transfer of properties that we have analysed above is its perfect agreement with certain facts revealed by the study of the brain waves in their relation to the physiology of sensation, notably to visual perception.

From his earliest researches, Berger had noted a phenomenon of capital importance, which subsequently helped the technicians of cybernetics to understand better the necessity of the disconnection times of electronic calculating machines. This is the arrest reaction: that is, the temporary suppression of the α waves, the fundamental rhythm of the electrical activity of the brain, under the influence of visual excitation. But other sensations, and even the intellectual effort of interest and attention, can provoke the arrest reaction. Something extremely interesting, which corroborates very precisely the importance of the transfer of properties that we have just studied, is that it is not a homogeneous luminous flux but indeed a heterogeneous field that provokes this suspensive action. Thus it is not the perception of light, but indeed that of forms and objects, which acts here. A subject may remain with eyes wide open in a uniformly illuminated visual field and there will be no clear modification of the α rhythm; Loomis, Harvey and Hobart demonstrated this in a very simple way by having a subject look at a white sheet of paper and at a watch: the rhythm disappears only at the sight of the watch. Thus the information of visual perception is indeed, in the end and above all, information about mathematical relations, about properties of structure, and at the same time we may find in these facts a confirmation of the figurative scheme of information proposed above.

The matter is even more curious if one admits, as Jasper, Cruikshank and Howard seem to have demonstrated, that the effort to perceive can have the same result as the perception of a heterogeneous flux. A subject who, in darkness, tries to perceive not just any luminous patch but this or that particular object, triggers an arrest reaction. It seems there that it is above all the intellectual effort, 'the attempt to see' (as Loomis, Harvey and Hobart write), which is here the fundamental element of the arrest. But this effort to see, and to see something, also corresponds to an information, an information triggered by memory and by the will to direct oneself. We see appearing here, for the first time in this account, a notion whose interest needs no underlining: from the standpoint of biological information, a concept can be equivalent to a sensation and can itself produce impulses analogous to those which transport sensory information. This is of enormous importance for the explanation of certain human behaviours which go beyond strict cybernetic logic.

The role of psychic factors in the basic electrical activity of the brain is found, in fact, in the animal as well as in man. Thus, as early as 1936, Ectors, experimenting on the rabbit, wrote that in order to produce the characteristic modification of the action potentials 'it is not enough to apply a sensory stimulation; the animal must also be in a state of receptivity indicated by expressive reactions'. And further on: 'Attentive observation of the rabbit at the moment an object was presented to it to smell made it possible to know for certain whether or not there would be a modification of the cortical potentials. Indeed, each time the animal sniffed, the alpha waves were attenuated or disappeared; each time it did not sniff, the alpha waves were found again with their normal amplitude.' Furthermore, if, repeating an experiment of Ectors, one pricks the forepaw of a rabbit with a pin, the painful sensation determines in the cortical potentials (recorded at the level of the masticatory centre) the suppression of the α waves and also, as usual in this kind of experiment, an amplification of the β waves, after which the α waves reappear. This is the characteristic reaction to the information brought by the sensory circuit. But if, immediately after this first experiment, the pin is brought near the animal's paw without, however, touching it, the animal shows the external emotional reaction of fear, and this purely psycho-physiological reaction also determines the arrest reaction: suppression of the α waves (without, however, in this case, any amplification of the β waves). Here again the information has been the transport of a psychic reaction and not merely of a sensation. In the cat, the state of excitation, or simply of alertness, that is of expectancy or apprehension, likewise brings about the suppression of the slow cortical rhythm (experiments of Rheinberger and Jasper).

A remarkable thing, which here confirms our second scheme of the distribution of cerebral information, is that the repetition of the same excitation induces in the animal a state of habituation, which ends by producing less and less distinct modifications of the rhythmic activity, which finally remains insensitive to it. In cybernetic terms one could say that information constantly reproducing the same message tends thereby to lose its value as information. It no longer presents any real discontinuity (since it constantly repeats the same elements) and at the same time loses its sequences of alternatives (since the alternative is replaced by a succession of identical motifs).

These facts may be compared with the indifference of the α waves, in man, to a homogeneous luminous flux. On the other hand, if, even without any objects or forms whatsoever, this homogeneous flux is abruptly cut off or switched on, the suppression of the fundamental rhythm is again obtained. In a subject (experiments of Adrian and Matthews) who is asked to make an effort to see while keeping the eyes closed, the suppression of the α waves can be achieved. In the first two cases the abrupt variation of the flux, in the third the intellectual effort, were at the origin of an information which produced the same inhibitory effect upon the rhythmic electrical activity of the cerebral cortex.

Tactile and auditory sensations (at least brief and interrupted ones) produce in general, on the electroencephalograms, arrest reactions analogous to the effect of visual stimuli. But much more interesting are the effects of a coexistence of stimuli. Durup and Fessard have shown that the suppression of the α waves provoked by a visual reception could be annihilated, and even replaced by an increase of the rhythmic amplitude, by a simultaneous auditory stimulus, for example a periodic noise.

The notion of habituation remains, with all its importance, in the reactions of the spontaneous electrical activity of the brain to continuous auditory stimuli (a monotonous song), which do not alter the α rhythm and sometimes even increase its amplitude. Likewise regular mental work, according to Adrian, Matthews and Delay, provokes only a slight diminution of the amplitude of the α waves, and sometimes does not alter them at all. On the other hand, all abrupt variations of the stimuli, questions asked unexpectedly, a sudden decision to be taken, will determine the arrest reaction. All these facts clearly show the important role of psycho-emotive factors in neuro-sensory information.

The neuro-psychologists who have devoted themselves to the study of electroencephalograms have determined that the arrest reaction takes place after a latency time which varies with the sensory stimuli and ranges between 0.09 and 0.70 sec, visual excitations having the shortest latency. There even exists a logarithmic relation between this latency time and the intensity of an abrupt illumination of determined duration (0.1 or 0.2 sec). If a constant value is given to the product of the intensity by the duration, a constant latency time is found again. Finally, the adaptation time is defined as the time necessary for the appearance of habituation, that is, for the return of a normal amplitude of the α waves in the presence of repeated stimuli.
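One compact way of writing a relation consistent with the two observations just reported (the logarithmic dependence of the latency on the intensity of a flash of fixed duration, and the constancy of the latency when the product intensity × duration is held constant) is sketched below; the functional form and the constants a and b are our assumption, not given in the text.

```latex
% A sketch consistent with the two observations above; the form and the
% constants a, b are assumptions for illustration only.
\[
  L \;=\; a \;-\; b\,\log\!\left( I\,\tau \right)
\]
% where I is the intensity and tau the duration (0.1 or 0.2 sec) of the flash:
% at fixed tau the latency L falls logarithmically with I, and if the product
% I*tau is held constant, L is constant, as stated in the text.
```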

The study of the spontaneous electrical activity of the brain and of the rhythmic modifications following sensory stimuli has led the neuro-psychologists to interesting conclusions on the bio-electrical topography of the brain and, above all, on the interpretation of certain pathological disorders. It has been found, for example, that visual excitations act electively on the occipital waves and tactile excitations on the precentral waves, a capital element for locating the projection of the sensory areas at the level of the cortex. From the pathological point of view we shall retain, to remain within the frame of our subject, the possible existence of a normal α rhythm in the blind (that is, in the absence of any visual stimulus) and above all the fact that visual hallucinations determine zones of silence of the α rhythm. The fact is particularly important, for, placing ourselves at the cybernetic point of view, we shall say here that the information issues directly from a cerebral process, without any direct external visual stimuli, and solely from a disturbance of the internal chemistry of a certain layer of the cortex.

The information is not here attached to an energetic transport supporting properties of structure and connected with a psychic process or with a concept. It comes from the intoxicated humoral milieu (mescaline hallucinations) or, in the case of hypnotic suggestion, from an 'order' directly perceived by the psychism. One may suppose, in the one case as in the other, that it sets in motion structural or colorimetric properties preserved by memory. The extraordinary thing is that it behaves, in its essence, exactly like the information issuing from energetic stimuli, and that it seems equally to require, in order to be 'received', the suppression, or at least the diminution, of the spontaneous rhythmic activity.

Thus these pathological alterations confirm the profound unity which seems to connect sensory information with information issuing from a concept, and show that in the one case as in the other the energetic transfers afferent to the centres are indeed the witnesses, if not the transporters, of a complex of properties. We perceive here an essential character which is liable to affect biological information, at least in its highest forms: the concept is capable of 'informing itself' in an energetic transfer.

We have personally sought to complete these data by a series of experiments derived from our research method of tests measuring the acuity of temporal perception and the intervals of non-simultaneity of sensory perceptions, which serves us to appraise the action of certain substances on the neuro-psychic tonus of various pathological states, among them those of senescence. We therefore used the method of luminous signals on subjects undergoing electroencephalography according to the usual techniques, and we were able to note the following results.


The signal light being constituted by a Maltese cross illuminated in red, one first notes on the electroencephalogram the expected diminution of the α rhythm.

Then, the rhythm having returned to normal, the red cross is made to reappear while the subject is asked: 'Is this cross green?... Isn't it green?...' The results of the tracings are then always disturbed, but not always in the same direction. We have been able, in this connection, to classify the subjects into two categories.

(a) On the one hand, subjects with a well-balanced psychism show on this test a clearer, quicker and more lasting diminution of the α rhythm in 74% of cases, a lesser diminution in 17% of cases, and an indifference of the α rhythm in 9% of cases (with even a certain tendency to accentuation) (total number of subjects: 46).

(b) On the other hand, in fatigued or senescent subjects and in unstable older children, the result presented itself in the reverse order: the cases of lesser diminution, that is, of a less marked arrest reaction, prevail with 65% over the cases of an increased arrest reaction: 35% (total number of subjects: 52).

These results may be interpreted in the light of the preceding data by considering that the concept of the suggested colour triggered an information which was superadded, in the same direction, to the sequences of alternatives of the sensory information resulting from the perceived colour, thus accentuating the value of the latter and thereby increasing the intensity of the arrest reaction. It is natural to find that this occurs above all in calm and stable subjects who, at the test of the 'contradictory question', immediately re-establish reality, and for whom, on the contrary, the suggestion of the false colour seems to serve only to bring out more sharply the contours of the true colour and thus to amplify the total information issuing from it. Since the intensity of the red signal has not varied, one must indeed admit that the conceptual information has been translated by an increase of the energetic charges at certain relays of the informative circuit. These relays are evidently internal and probably run from one neuro-perceptive centre to another. They must bring into play phenomena of inhibition on certain paths and of 'facilitation' on others.

As for the second category of observed facts, which is established above all in the case of asthenic, senescent or unstable subjects and results in a greater number of arrest reactions that are less characteristic and less marked than in the first phase of the experiment, we think it can be connected with what we had called, in earlier works, the preferential derivation of impulses. This is the phenomenon observed when two sensory stimuli, auditory and visual for example, are made to act simultaneously. The diminution of the α waves is then observed much less, and sometimes even an increase of their amplitude must be noted. And this results from a negative interaction of one or the other of the stimuli (in this instance of the auditory upon the visual) which diminishes the amplitude of the equilibrium retro-action between the bio-electrical energy of the brain and the informing impulse. We shall return to this point further on.

But for the interpretation of the experimental series we are examining, we shall note that the particular psychic state of the subjects of this group means that the coloured concept, affirmed and suggested, seems to impose itself from their cortex upon their neuro-perceptive centres as a conceptual information which predominates over the sensory information.


There is thus the equivalence of two simultaneous stimuli... The subject thinks 'green' much more than he is conscious of perceiving the 'red'. Between these two informative circuits there is no longer addition and juxtaposition, as in the first case, but a preferential derivation of impulses which no longer makes the diminution of the α waves necessary and may even let them increase in amplitude.

We carried out another series of experiments of the same type with respect to auditory information, by letting the subject undergoing electroencephalography hear a piece played on the piano and asking: 'It is a violin you are hearing, isn't it a violin!', the instrument being of course concealed in a neighbouring room. The responses were on the whole of the same type as with the experiment of the luminous signals, and were referred to the same categories of subjects. However, the inversion of the arrest reaction in the group of fatigued subjects was less marked: 52% of cases.

It is possible, by placing oneself at the cybernetic point of view of retro- and inter-actions, to give a convenient and logical explanation of the play of sensory stimulations upon the spontaneous bio-electrical rhythms of the brain.

Thus, the cortical centres affected by the stimuli being considered as an effector and the sensory information as a causal factor, the diminution of the α waves becomes an effect. A logical hypothesis to formulate is to admit that the diminution of the α energy corresponds to the necessity of counterbalancing, of annulling, a part of the sensory energy concomitant with the information; it is as though a certain state of rest, or of lower bio-electrical potential, were needed for the brain to be sensitive to the informing energy and to allow it to release its structural properties. This is, evidently, only a hypothesis, but, we repeat, a very plausible one according to the neurological data. If it is admitted, the preceding data can then be brought into a simple cybernetic framework and their interpretation unified.

Fig. 3. Retro-action of the diminution of the α rhythm upon the sensory information (In.s.); C: cortex; I.v.: visual information; I.a.: auditory information; I.A.: interaction of I.a. upon I.v.

Thus, according to Fig. 3(a), one may consider that the variations of the effect (the diminution of the α waves) are linked to their factor, the impulse of information, by a genuine retro-action (the energetic quantity corresponding to the α diminution annulling a part of the energy of the impulse). This internal 'feed-back' makes the system tend towards a genuine state of equilibrium between the α waves and the informing energy, which corresponds to the best conditions of sensory perception.

The latency time then takes on a quite precise meaning: it is purely and simply the delay time (the hysteresis) of this retro-action.
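A minimal numerical sketch of this reading of Fig. 3(a) is given below: the α reduction is driven toward offsetting part of the informing energy, and the loop delay plays the part of the latency (the hysteresis). The dynamics, the constants and the time step are illustrative assumptions of ours, not measurements.

```python
# Minimal sketch of the feedback loop of Fig. 3(a), as read in the text: the
# reduction of alpha energy cancels part of the incoming informative energy,
# and the loop settles at an equilibrium; the loop delay stands in for the
# latency ('hysteresis'). All constants and the time step are assumptions.

DT = 0.01          # time step (sec), assumed
DELAY = 0.15       # loop delay standing in for the latency, assumed
K = 5.0            # strength of the alpha reduction per unit residual input, assumed

def simulate(stimulus=1.0, t_end=1.0):
    steps = int(t_end / DT)
    lag = int(DELAY / DT)
    alpha_drop = [0.0] * steps           # reduction of alpha energy over time
    for n in range(1, steps):
        # residual informative energy as it was seen DELAY seconds ago
        residual = stimulus - alpha_drop[max(0, n - lag)]
        # the alpha reduction relaxes toward matching that residual
        alpha_drop[n] = alpha_drop[n - 1] + DT * K * (residual - alpha_drop[n - 1])
    return alpha_drop

trace = simulate()
print(f"alpha reduction after 0.1 s: {trace[int(0.1 / DT)]:.2f}")
print(f"alpha reduction after 1.0 s: {trace[-1]:.2f}   (approaches equilibrium ~0.5)")
```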

According to Fig. 3(b), which evokes the case of a duality of sensory stimulation, visual and auditory, the preferential derivation of impulses that we have analysed corresponds to a negative interaction between the two orders of stimuli. This interaction no longer makes a diminution of the α waves necessary, and sometimes even allows a glimpse of a possible increase of their amplitude.

Fig. 4. Scheme of the 'hallucinatory' equilibration of the retro-action issuing from a toxic or psychic disturbance of the α rhythm.

Finally, Fig. 4 corresponds to the cases of visual hallucinations, toxic or psychic; its interpretation is more delicate, but also more significant. In the pathological disequilibration of cortical sensitivity by the toxic or psychic effect which produces a diminution of the α waves, the retro-action described above (Fig. 3), starting from the α waves, is then stretched 'in search', one might say, of an equilibrating element, since it can no longer attach itself to an informing energy of external provenance. We find here again the phenomenon of the tendency towards equilibration that we analysed in the preceding pages with regard to the regulation of sensitive organisms, by evoking the model of the electronic 'tortoises'. This tendency towards equilibration on a pathologically vitiated terrain is in itself a genuine information, comparable, as regards its effect, to that of effort or of conceptual abstraction. The visual hallucination is then the cybernetic response to the realization of the 'satisfaction' of this pathological equilibrium.

Since this response is constructed by centres specialized in the reception of visual messages, it will quite naturally be translated by reactions of the same order, but violently disturbed in forms and colours. This evidently supposes that there are no possibilities of effective and efficacious retro-action between the α diminution (that is, a change in the bio-electrical state of the brain) and the pathological causes. If this last retro-action is possible, the equilibrium can be effectively realized on a stable physico-chemical terrain, and no longer needs to attach itself to a hallucinatory construction. One sees the interest that such a notion may present for a better understanding of the genesis of the hallucinatory phenomena which play such a role in the pathology of sensation, and above all for understanding the sometimes paradoxical circumstances of their appearance or disappearance.

Evidently, in practice, the facts are more complex than the schematizations we have just indicated with a deliberate simplicity, intended to make the notions more precise in the reader's mind. For example, the interactions of Fig. 3 are not external, as we have drawn them, but internal, probably running from one localized neuro-perceptive centre to another; moreover, in addition to this inter-action, one should also consider the coexisting retro-actions between the diminution of the α waves and the double auditory and visual stimulation.

III. THE OBJECTIVE AND THE SUBJECTIVE: THEIR RETRO-ACTIONAL LINKAGE

This place of energetic transfer in conceptual elaboration can help us to understand better the relations of the objective and the subjective, and to establish between them a cybernetic linkage of the retro-actional type. The philosopher who concerns himself with this capital problem of the relations of object and subject, when he approaches the problem of information, has some tendency to confound cybernetic information completely with the information one might call 'psychic', which for him is identified with an acquisition of knowledge. Now this information, which is essentially a subjective phenomenon, is in reality an effect which, through the neuro-psychic centres considered as an effector, depends on a capital causal factor, information in the cybernetic sense of the term, which appears there as identifying itself with the objective. And according to what we have set out above, this acquisition of knowledge always resulting in a conceptual information, the latter reacts retro-actively upon the causal energetic information which is identified with the objective, and Fig. 5 can thus be established.

Fig. 5. The retro-action from the subjective to the objective. N.P.: centre of the neuro-psychism; I.s.: sensory information; I.c.: conceptual information.

The cybernetic point of view applied at the level of neuro-psychic information therefore allows us to make precise the linkage of the subjective to the objective and to situate its data. As we had occasion to develop in March 1961 at the Sorbonne, before the Société Française de Philosophie (on the occasion of a lecture by de Beauregard), this linkage is established essentially along a retro-actional process in which cerebral information intervenes under its two modalities: the conceptual information being the point of departure of the retro-active process which acts back upon the quantity of sensory information, that is, upon the ultimate modality of transformation of the objective element. Thus this retro-actional linkage of the subjective to the objective appears as a fundamental irreducible of knowledge, depending essentially on the very specificity of the subject, or more exactly on the specific functionality of his neuro-psychic centres. It is absolutely vain and illusory to claim to reduce it, or a fortiori to annihilate it, by a maximum concern for experimental 'objectivity': electrical tracings, automatic recordings and detections, etc. All these precautions, desirable and perfectible as matters of fundamental rigour, may serve to increase the quantity of true sensory information, that is, information truly relative to the experiment in progress. But as soon as this sensory information sets off its transformations of action in the receptor centres of the brain, and as soon as the conceptual information emerges from it concomitantly with the acquisition of a piece of knowledge, the retro-active linkage of the subjective to the objective is established by the very fact of the retro-action of the concept upon the sensation, and this independently of any experimental apparatus, within the very circuits of cerebral neuro-perception.

It seems to us of great interest to point out to what extent the cybernetic point of view, correctly applied, brings very important basic precisions to a major problem of knowledge which has always aroused the complementary interest of philosophers and psychologists.

We may further consider that it is from the passage, or rather from the transformation, of action-information into action-perception that there emerges, through the neuro-receptors of the brain, the confrontation of intellectual properties of structure, some of which underlie the mathematical realities of action, while others probably issue from a confrontation or a combination of the former. But from this whole there will arise, at the same time as the psychological fact of knowledge, the conceptual information, a new form of transformation of action which will react retro-actively upon the basic informing action, within the very interior of the neuro-psychic centres.

Fig. 6. Scheme of the transforming sequences of informative action along its neuro-psychic path. Phases I and II are symmetrical with respect to the sensory receptors; III and IV with respect to the informing consciousness. The two elements appear as conjugate, and the whole is symmetrical with respect to the formation of the concept.

It is therefore important to make quite clear that this conceptual information is not the concept itself, nor is it the cause of the concept; it is, on the contrary, the primary effect, the immediate informative translation of the appearance of the concept (Fig. 6).

From a general philosophical point of view, one may envisage, in the sequence of events which trigger and transform sensory information, a symmetrical alternation, with respect to the appearance of the concept, of two complementary phases.

The first phase corresponds to the experimental creations or to the observable phenomena of the physico-chemical world, from which emerge the emissions of photons or quanta of energy and, more generally, the waves of action which will strike our sensory receptors. This phase is essentially entropic or divergent (the very fact of action tending naturally to dilute itself in the divergence of its waves).

The second phase is that corresponding to sensory information properly so called, as it travels from the external receptors to the neuro-psychic centres. It is a phase based essentially on the differentiation and selection of systems. It is at once negentropic (that is, the inverse of the entropic increase) and syntropic, that is, based on a certain convergence of the transformed modalities of action-information towards the neuro-psychic effectors.

It is at the end of this second phase that conceptual elaboration can take place, with the appearance of notions of properties of structure resulting from the modalities of transformation of action. This appearance of the concept may therefore be pictured as a centre of convergence with respect to the sequences of the informing action.

From it a third phase will appear: the liberation of new elements of action corresponding to the conceptual information. This phase, and this is a fundamental point, is at its outset entropic, that is, the conceptual information would tend of itself to dilute into divergent waves of action if it were not at once differentiated in the neuro-receptive circuits so as to result in the acquisition and summation of knowledge, a phenomenon essentially negentropic and syntropic: this is the fourth phase.

Of course, it must be remarked here that this analysis of the phenomena obliges us, if it is to be really satisfactory, to evoke and bring in the role of two fundamental elements. First, that of the informing consciousness, which makes it possible, for example, to grasp at once, within the conceptual information, the information common to a series of informations, and which presents itself as the essential instrument of convergence leading to the acquisition of knowledge. In the second place, memory, which presents itself here as the operative element permitting, on the one hand, the summation of knowledge, and which is, on the other hand, a capital source of reserve conceptual information that can at any moment be relaunched directly into new neuro-perceptive circuits and then, by retro-action, enrich the concomitant sensory information. Consciousness and memory lead us, moreover, to consider here, as a new factor of conceptual information, the notion of temporality, the question being here above all of the psychological time inherent in the concept of change; for to say that time is a change perceived by a consciousness amounts absolutely to saying that a consciousness is the projection, in time, of the perception of a change. Now this psychological time, like physical time, is itself of discontinuous structure, despite the appearances of continuity, which are only a global effect due essentially to the overlapping of perceptions. In reality we live in an eternal present which, psychologically, is never the same as itself, and this present is a limit-notion which destroys itself by the very fact of being attained. These psychological discontinuities, and the direction of their registration by the informing consciousness, are in themselves supplementary sources of information with which the elements of conceptual information come to be confronted, and towards which the former seem to act as sieves and orienters, determining precisely the passage to the negentropic convergence of the first, entropic, sequences of the conceptual information.

All this evidently brings us to approach domains that somewhat exceed the precise frame of this account, while prolonging it, on a strictly logical plane, into the domains of the psychological and, beyond psychology, of the spiritual. We can therefore only sketch such prolongations summarily here, reserving the right to return to them in other pages. We shall simply emphasize in this connection to what extent the views developed above show how much the cybernetic point of view in neuro-physio-psychology, even more than in any other domain, requires that one know how to look over the watertight partitions of excessive specialization and to react against that artificial compartmentalization of research which Wiener so much deplored, and which means that each worker, too preoccupied with his own problem, 'has an unfortunate tendency to regard the neighbouring problem as belonging properly to the colleague of such-and-such a door on such-and-such a side of the corridor'.

IV. GENERAL CONCLUSIONS

One sees to what extent cybernetic considerations are capable of throwing some new light on that ancient question of the relations of the psychic and the organic, as old as medicine and even contemporary with the first exercises of human thought. Through the study of the bio-electrical reactions of the brain, of the electronic transfers concomitant with the elaboration of the concept, with the perception of forms and with the manifestations of the informing consciousness... Through the notion of the finalities of equilibrium offered to neuro-psychic regulation, they allow us, not to understand better, but to catch better in the act some of the points where, through our most highly specialized biological receptors, the very mysterious but indisputable articulation of matter and mind takes place. Finally, the essential point of the problem seems to us reducible to these two notions, which the scientific philosopher will not be surprised to find, here too, complementary:

1. A concept and its formal equivalents, that is, in short, a set of properties of intellectual structure and of relations (also of an intellectual order) between these properties, is translated, through our neuro-psychic receptors, and exactly like a sensory stimulus, by a transfer of electrical charges and, still more, by a certain spatial and chronological distribution of that transfer.


2. Our neuro-psychic receptors are biologically constituted in such a way that they can translate the sequences of electronic charges which affect them through the sensory circuits into a series of formal representations which, in the end, also evoke a certain number of structural properties or of combinations between these properties.

There are evidently, in the groupings of incoming photons and quanta of action to which the sensory stimulus is reduced, differences of a purely energetic order; but these differences cannot explain the infinite range of varieties of perception, and it seems rather that it is the temporo-spatial combinations of these groupings of photons or quanta which explain this extraordinary diversity of responses.

Thus the high specificity of our neuro-psychic receptors would be, above all, their ability to translate into properties of structure, and combinations of these properties, the energetic variations, the spatial distributions and the chronological lags of the groups of quanta and photons which affect them and, reciprocally, to make correspond to the conceptual and formal elaborations energetic transfers capable of acting back, among other things, upon the bio-electrical activity and the various organic equilibria of homeostatic type.

That a biological dignity of such incomparable value should appear to be the specific attribute of cellular groupings whose fundamental structure, taken statically and in isolation, does not seem to present essential differences from the other organic cells, is evidently a question whose mere statement would risk carrying us well beyond the limits of this study! Let it suffice for the moment to consider, first, to what extent only the correct cybernetic study of the phenomena of neuro-psychic perception can allow the problems of the functional activity of our cerebral centres to be approached in their full complexity, and how this same point of view can allow us to underline, foreshadow and propose new and powerful motives of interest in those relations of the psychic and the organic which biology and modern medicine are now striving, more and more, to examine physiologically and to translate therapeutically.

V. COMPARISONS BETWEEN CEREBRAL INFORMATION AND THE INFORMATIVE SEQUENCES OF ENDOCRINE REGULATIONS

In the self-regulations of the endocrine system one is led to evoke homeostatic devices, such as that realized, for example, by hypophyseal functioning in its relations with the genital glands and the thyroid. It is a genuine gonadal homeostat.

As early as 1950, with regard to the action of different doses of X-rays on the hypophysis, we were led to expound (before the Société Française de Thérapeutique et Pharmacodynamie) the notion that endocrine functioning must be considered as operating in a discontinuous fashion, following successive steps of action separated by free intervals. It is during these intervals that the gland receives the stimulations or inhibitions of the endocrines connected with it, which are set in action precisely by the effector activities of the action period. This is what we then called physiological quantification, which corresponds to the fact that, if the phenomena are analysed in sufficiently small intervals of time, an endocrine functions only from a certain summation of the affector indices (which come above all from the connected endocrines), the interval necessary for the summation of these indices by the substrate being a physiologically free interval. Such a point of view had then allowed us to give a satisfactory explanation of certain results of the action of variable doses of X-rays on the hypophysis, and still more of the impossibility of any functional action in certain cases.

Pour le sujet qui nous occupe dans cet exposk, nous pourrons remarquer que la sommation des indices d’oscillation physiologique d’une endocrine, correspond pour les rkcepteurs qui lui sont reliCs, a l’information hormonale effectrice pour l’une, affectrice pour les autres. Mais, dans un cas cornme dans I’autre, le substrat informatif est une kmission de molkcules hormonales ou para-hormonales. Ce substrat est donc infiniment plus dense, plus matkriel, plus massif que les skquences purement CnergCti- ques de l’information ckrkbrale.

Deversk en gknkral dans le courant circulatoire, il subit d’autre part la lenteur relative de ce flux, de sorte que cette action peut apparaitre continue. Mais elle est en rkaliti, elle aussi, discontinue, mais les intervalles de discontinuitk sont en fait extreme- ment plus larges que ceux qui &parent les kmissions photiques ou quantiques suc- cessives, bases de l’information sensorielle du cerveau.

Voici donc deux diffkrences fondamentales : massivitk du substrat, klargissement des intervalles de discontinuitk, qui skparent nettenient l’information endocrinienne de l’information ckrkbrale.

D’autre part, on peut interprkter l’action en retour des rkcepteurs endocriniens d’une premikre endocrine, comme une vkritable retro-action des quanta physiologiques d’action de celle-ci, sur son substrat, A travers I’action regulatrice des recepteurs. De cette rktro-action va dCpendre (voir Fig. 7) la valeur de I’intervalle chronologique

!

, :C,

I --..-__________ 4

Fig. 7. Schtma de la quantification physiologique d’un homtostat endocrinien. A: indice d’action ; R : intervalle libre; Er: endocrine reliee; C1, CZ: autres facteurs causaux.

de discontinuitk. Elle peut Ctre + ou -. I1 nous semble possible de pouvoir comparer, cybernktiquement, cette rktro-action du quantum physiologique d’action de l’endocrine

Page 148: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

INFORMATION SENSORIELLE ET C O N C E P T U E L L E 139

sur l’intervalle libre pendant lequel l’endocrine reqoit son information extrinsbque, A la retro-action de l’information conceptuelle sur l’information sensorielle, telle que nous venons de l’analyser dans l’information cCrCbrale. Mais la encore cette retro- action s’etablit suivant un intervalle chronologique extremement Clargi par rapport a la rCtro-action de l’information conceptuelle du cerveau, puisqu’elle doit en fait s’ela- borer a partir de l’endocrine associte, et la encore par la mediation de lourds agrtgats molkculaires se propageant a des vitesses trbs lentes par rapport a celles des influx electroniques.

Moreover, such endocrine feedback brings out only new chemical properties, which are at once embodied in new hormonal compounds and which are of course wholly inadequate for disengaging structural properties, properties which in any case could not be perceived.

Thus, if one wished to compare cybernetically, solely from the point of view of the value and ordering of the informative sequences, cerebral functioning with that of an endocrine homeostat, one could say that the latter functions, broadly speaking, more or less like a brain, but like a brain that is extremely incomplete and, even more, extremely slowed down.

But one could turn the problem round and consider that the cybernetic analogies, however rough and approximate they may be, which can be noted from the informational point of view between the endocrine homeostat and cerebral functioning, exist only because in the former there is, in addition to the endocrine gland first considered, another endocrine pole which is itself a receiver-informer.

Taking up the analogy again in the opposite direction, and raising it to the level of the infinitely nuanced, structured, juxtaposed and mutually confronted characters of cerebral information, with its feedback of the conceptual upon the sensory (a feedback which is itself informative and generative of structural properties), could one not then conceive that such a dynamic equilibration is possible only because there must exist beside, or within, the organic neuro-psychism proper another receiver-informer pole, 'fed' in part by the first, but also informing it, and all the more powerfully the better it is 'fed' by it? The fact that it has no material support takes nothing whatever away from the rigour of its cybernetic necessity... might this not, at the end of such a perspective, once again be the place and the role of the Spirit?

RESUME

In a series of studies on biology and cybernetics we have been led to show how, in the general behaviour of the human organism, sensory information, that is to say the perception by the neuro-psychic centres of extrinsic stimulations, can be brought close to the information arising from a concept; in other words, how the concept, and all its equivalents in the ideo-sensory field, is capable of taking form in a biologically perceivable energy transfer. The study of neuro-encephalograms, in particular, shows how the effort to perceive produces the same arrest reactions of the basic brain waves (α-rhythm) as the perception of a heterogeneous luminous flux. This role of psychic factors in the basic electrical activity of the brain is found in animals as well as in man.

It is on the basis of such facts that it seemed possible to us to propose a scheme of cybernetic equilibrium between the α-waves and the information energy of sensory perception. This conception has even enabled us to provide a scheme of the formation of certain hallucinations which 'compensate' pathological disorders, such as certain visual hallucinations. But the interest of such a process of equilibration is that it makes it possible to understand how a purely psychic or conceptual modality can provide, through modifications of the transfer of electronic charges, the compensatory feedback which varies the basic electrical excitability of the brain in the direction of the restoration of a disturbed equilibrium.

The high specificity of our neuro-psychic receptors would therefore consist, above all, in being able to confront with one another, in such a way as to bring out formal properties, the spatial and chronological distributions of the energy sequences which they receive, and reciprocally in making conceptual elaborations correspond to energy sequences capable of acting, by way of feedback, upon the bio-electrical activity of the brain and the homeostases which depend on it.

SUMMARY

SENSORIAL AND CONCEPTUAL INFORMATION IN THE BIO-ELECTRICAL ACTIVITY OF THE BRAIN

In a series of studies on biology and cybernetics, we have made an attempt to show how, in the general behaviour of the human organism, sensorial information, i.e. the perception by the neuropsychical centres of extrinsic stimulations, may be compared with the information arising from a concept; in other words, how the concept, and all its equivalents of the ideosensory field, is capable of taking form in a biologically perceivable energetic transfer. The study of neuroencephalograms in particular shows how the effort to perceive produces the same reactions of arrest of the basic brain waves (α-rhythm) as the perception of a heterogeneous flow of light. This influence of psychical factors on the electrical basic activity of the brain is found both in animals and in man.

It is on the basis of these facts that it seemed possible for us to propose a scheme of cybernetic balance between the α-waves and the energy of information of the sensorial perception. This conception even permitted us to give a scheme of the formation of certain 'compensatory' hallucinations of pathological disorders, such as certain visual hallucinations. But the significance of such a process of equilibration resides in demonstrating how a purely psychical or conceptual event can provide, by modifications of transfer of electronic charges, the compensatory feedback which varies the electrical basic excitability of the brain in the sense of restoration of a disturbed balance.

Thus, the high specificity of our neuropsychical receptors would be, in the first place, to make it possible to compare the spatial and chronological distributions of the energy sequences which they receive and in such a way to bring out their essential properties, and, reciprocally, to establish a correspondence between the conceptual elaborations and the energy sequences which by the feedback route may have a repercussion on the bio-electrical activity of the brain and the homeostases depending on it.

DISCUSSION

SCHOUTEN: Have I understood correctly that you showed the subject a red signal while trying to convince him that it was a green signal?

HUANT: A red signal is indeed shown to the subject, but no attempt is made to convince him that it is a green signal. The question is simply put to him: 'is this not a green light?', in order to suggest to him the concept of green by opposing it to the sensory perception of red.

SONGAR: The so-called reaction d'arret is characterized by a diminution of amplitude but, at the same time, by an augmentation of the frequency of the background activity of the cerebral cortex. This is the result of a desynchronization due to an activation caused by the brain stem reticular formation. So I think it would be better to speak of a reaction d'eveil instead of arret.

If the main mechanism of this arousal reaction is the multineuronal, polysynaptic organization of the rostral part of the brain stem reticular formation, and if the repetitive stimuli do not cause such a reaction, is it possible to speak of an adaptation state of the brain stem reticular formation?

HUANT: I agree as to the reality of the phenomenon described. But the term reaction d'arret, which refers to the essential fact of the diminution of amplitude, appears to be consecrated by usage, at least in France.

It does indeed seem to me quite possible to speak of such a state of adaptation.

Cybernetic Model of an Active Memory with Unlimited Reception Capacity

J. L. SAUVAN*

INTRODUCTION

The problems which memory raises concerning its substrate and, above all, its functioning do not seem likely to be elucidated soon by the experimental method.

In such cases the researcher has a recourse which, by an indirect route, leads back to experimentation, which is always necessary for certainty. The method of simulators, a method proved by long use and cybernetic before the word existed, constitutes this precious recourse. Working hypotheses belong to the category of simulators; they are intellectual simulators.

Every hypothesis concerning the substrate of memory contains a metaphysical implication. Indeed, the construction of a simulator demands the immediate introduction of the essential properties which one considers to belong to the object of the research. One must therefore declare from the outset one's position as to the possible materiality of the substrates of memory; this position determines the idea one may form of the nature of the soul. One can see why the controversies are so bitter around such a subject, which ought not to arouse more passion than other physiological researches.

The first concern of the spiritualist will be to make his hypothesis incompatible with the possibility of a physical substrate, for example by attributing to human memory the power of the continuum. This is the position taken by the Ingenieur General de l'Air Vernotte at the symposium Cybernetique et Connaissance (Zurich, 1956-1957): since memory (and thought in general) is capable of containing the notions of the sets aleph 1 and aleph 2, and since moreover there is no point of these sets which thought cannot reach (which is different from the possibility of reaching them all), memory must necessarily share in the properties of what it has elaborated and therefore have the power of the continuum. This thesis can, it seems, be refuted.

If, on the contrary, the aim is to defend a hylomorphic metaphysics or a purely materialist conception, or if a researcher refuses on principle any hypothesis which forbids him to pursue his investigations, the simulator will involve a discontinuous substrate of memory, which makes it possible to bring the very object of the research back into the material world.

* Present address: 43, Boulevard Albert-1er, Antibes (France).

ORGANIZATION OF THE DISCONTINUOUS IN MEMORY SIMULATORS

Dans le domaine de l’exploitation logique des engrammes qui, en premibre approxi- mation est celui qui nous intCresse, I’emmagasinement des informations a lieu de faCon analytique. Chaque fait mCmorisC est dissCquC par les capteurs en autant d’CIC- ments qu’il est ntcessaire d’en retrouver au moment de l’extraction hors de la mCmoire. Chaque Clement est mCmorisC pour son propre compte, et il faut en outre mCmoriser la loi de relation, qui doit se manifester lors de l’extraction, loi qui unit les diffkrents ClCments du fait. Par exemple dans un espace occupC par des points mattriels tous identiques, I’existence d’un point en un lieu donnC est un phCnomkne binaire, il y a 18 un point mattriel ou bien il n’y en a pas. Actuellement on mCmorise cel8 sous forme de trois coordonnkes (c’est 8 dire au minimum par trois points pour en dCcrire un seul) et il faut d’une faqon ou d‘une autre mCmoriser la signification exacte de chaque valeur ainsi mtmoriste (ne serait ce qu’en ayant un code qui explique que les coordonnCes seront CnumCrtes dans un sens immuable x, puis y , puis 2).

Ce n’est pas parce qu’elle a une efficacitC remarquable en mathkmatique que cette mtthode doit supplanter toutes les autres dans d’autres domaines. Pour nous elle introduit une Cnorme perte de rendement et elle dissocie des proprittts qu’il faudra a nouveau ptniblement reunir lorsqu’il s’agira de manipuler en bloc des faits, ce qui est une fonction habituelle de la mCmoire. Les grandes ordinatrices sont incapables d’assumer ces fonctions 8 leur rythme habituel.

Les rkalisations spontanees de l’esprit humain, au contraire, ont une forme synthC- tique. Les mots d’une langue peuvent Ctre assimilCs A un point repCrC dans le diction- naire (le rCfCrentie1). Chaque point qui ne peut se manifester que de deux fagons (Ctre prCsent ou Ctre absent) supporte une quantitt tnorme d’information (par exemple un mot tel que “Europe”).

Un dernier mot en ce qui concerne la nature de la mCmoire: on a trks justement fait remarquer que capteurs et effecteurs avaient une physiologie discontinue. 11 serait bien invraisemblable que la “boite noire” qui manipule la penste posskde deux fonctions de transformation du continu en discontinu et vice versa 8 la sortie et a I’entrCe.

H Y P O T H B S E S C O N C E R N A N T L E S S U P P O R T S B I O L O G I Q U E S D ’ E N G R A M M E S

Elles semblent pouvoir se ramener 8 trois : Pendant longtemps les neurones ont paru les seuls

Cltments du systkme nerveux central susceptibles de prendre et de conserver plusieurs Ctats diffkrents. Cette hypothbse semble perdre du terrain. Elle permet de tabler sur au moins 1010 ClCments et 1012 liaisons.

2. Support par les ve‘sicules de Bok. C’est l’hypothbse la plus attrayante. Ces vtsicu- les peuvent prendre deux ttats stables et chacune d’elle est en relation sptciale avec dix conducteurs exactement. Leur nombre serait de 1012 au moins.

Leur nombre est immense et n’a pas CtC CvaluC. Ces molCcules peuvent prendre Cgalement plusieurs Ctats et restituer

1. Support par les neurones.

3. Supportpar les mole‘cules d‘ADN ou bien d’ARN.

Page 153: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

144 J . L . S A U V A N

de I’information. Mais d’une part ces niolCcules ne sont pas “amarrtes” et on ne sait si leurs liaisons avec d’tventuels canaux de transmission seraient permanentes. D’autre part les messages que ces moltcules Cmettraient seraient des radiations et I’on n’a aucune idte de I’articulation qui existerait avec les capteurs et les effecteurs.

Un simulateur de memoire biologique tel que nous le prtsentons peut donc, sans contradiction avec les hypothkses plausibles, exiger un nombre d’C1tments pouvant aller jusqu’a 1014 et au deli.

L A N O T I O N D’ACTE

Le fonctionnement d’une machine peut toujours se dCcrire par une suite d’tvknements de structure identique s’engrenant ttroitement les uns les autres. Ce sont les actes. L’acte est vtritablement la “moltcule” du fonctionnement d’un systkme.

On appellera acte e‘lkmentaire la triade suivante : (a) Etat initial de l’environnement tel que le systkme considtrC le percoit par ses

capteurs. C’est la situation de depart (SD). (b) Le mode d’intervention de l’ensemble des effecteurs sur cet environnement. C’est

f’action (a). (c) Etat final (transform6 par I’action) de l’environnement tel qu’il est a nouveau

perw par le systkme. C’est la situation resultante (SR) . Les situations correspondent i des ttats prtcis de l’ensemble des capteurs. Les

actions correspondent a des commandes prtcises de l’ensemble des effecteurs. L’acte complexe (AC) sera dtfini ulttrieurement.

L’acte peut Ctre dtfini‘ du fait que les capteurs et les effecteurs sont des organes qui fonctionnent de facon discontinue. On s’apeqoit que :

1. La situation rtsultante de chaque acte constitue la situation de depart de I’acte suivant. Le fonctionnement est donc entikrement dtcrit par une sorte de “spirale” in- interrompu d’actes s’engendrant les uns les autres ; elle constitue l’histoire “objective” de la machine (c’est a dire I’histoire vue du deliors).

2. Le nombre des capteurs et des effecteurs ainsi que celui de leurs ttats ttant fini, le nombre des actes possibles est kgalement fini. S’il s’agit de systkmes binsires et si c est le nombre de capteurs et o le nombre d’effecteurs, le nombre de tous les actes possibles est 2c x 2 O x 2c. Ce nombre devient vite non seulement trks grand mais vtritablement immense dks que o et c augmentent, m@me modtrtment. Les projets de rkalisation exigent la mise en oeuvre de proctdts d’tconomie des Clements.

3. I1 ne peut exister de notion de temps intrinskque a cette mtmoire; elle est rem- placke par la notion de succession et par tous les renseignements qui peuvent en Ctre dtduits. Le “temps d’horloge” peut Ctre rtintroduit sous forme de la perception d’un tvtnement exttrieur qui vient alors dtcouper temporellement la suite des actes.
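
As a purely illustrative aside (ours, not the author's), the counting rule just stated can be written out in a few lines of Python; the function names and the choice of small values for c and o are assumptions made only for this example.

    def possible_acts(c, o):
        """Number of possible elementary acts for a binary system with c sensors
        and o effectors, as given in the text: 2^c starting situations x 2^o
        actions x 2^c resulting situations."""
        return (2 ** c) * (2 ** o) * (2 ** c)

    # An elementary act is the triad (SD, a, SR); here each component is simply
    # a tuple of 0/1 states of the sensors or of the effectors.
    def elementary_act(sd, a, sr):
        return (tuple(sd), tuple(a), tuple(sr))

    print(possible_acts(3, 3))    # 512
    print(possible_acts(5, 5))    # 32 768
    print(possible_acts(10, 10))  # 1 073 741 824, already over a thousand million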

ELEMENTARY PRINCIPLES OF CONSTRUCTION OF AN ACTIVE MEMORY

In such a memory, all the information (and it may be very extensive) concerning an act is memorized on a single binary element. This means that such a memory must comprise 2^c x 2^o x 2^c binary elements, intended to carry the engrams. As long as the corresponding act has not taken place, the binary element remains in position 0; as soon as it has been realized, the element passes definitively into position 1. In the latter case the reiteration of the act will produce no change in the set of binary elements.

To realize such a memory, the planned collection of binary elements must be arranged in a three-dimensional matrix (Fig. 1), the dimensions corresponding one to the starting situation, one to the action and the last to the resulting situation.

Fig. 1. (Three-dimensional matrix of binary elements; the element shown corresponds to the act 11-10-10.)

Each resulting situation in this matrix is linked to all the elements belonging to the subset characterized by a starting situation identical with that resulting situation.

An element passes into position 1 when the three factors of the act which it represents are united. Conversely, by 'interrogating' a binary element one knows, in a single operation, whether the corresponding act has taken place, and one has the analytic description of all the factors of that act.
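
Purely as an illustration of the mechanism just described, the matrix of binary elements and its two basic operations (switching an element to 1, and interrogating it) can be sketched as follows. The class and method names are ours, and a Python set stands in for the three-dimensional matrix, holding only the coordinates of the elements that have switched to 1 (all the others being implicitly 0).

    class ActMemory:
        """Sketch of a centre of integration: one binary element per possible
        act (SD, a, SR). Elements start at 0 and switch definitively to 1 the
        first time the corresponding act occurs."""

        def __init__(self):
            self.state_one = set()   # coordinates of elements currently at 1

        def inscribe(self, sd, a, sr):
            """Record that the act (sd, a, sr) has taken place; reiterating
            the same act changes nothing."""
            self.state_one.add((tuple(sd), tuple(a), tuple(sr)))

        def interrogate(self, sd, a, sr):
            """In a single operation: has this act taken place? If it has, the
            coordinates (sd, a, sr) are themselves the analytic description of
            the three factors of the act."""
            return (tuple(sd), tuple(a), tuple(sr)) in self.state_one

    ci = ActMemory()
    ci.inscribe(sd=(1, 1), a=(1, 0), sr=(1, 0))    # the act 11-10-10 of Fig. 1
    print(ci.interrogate((1, 1), (1, 0), (1, 0)))  # True
    print(ci.interrogate((0, 0), (1, 0), (1, 0)))  # False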

L’appareil constituk par la matrice des Cltments de mCmoire accompagnCe du systkme d’extraction des informations (qui ne sera pas dCcrit) et des canaux d’infor- mation centripktes et centrifuges est appelC par analogie biologique “centre d’intCgra- tion” (CI) (ou bien centre d’inscription.) Un seul Clement en effet supporte des infor- mations concernant des Cvenements disparates, perceptions et ordres d’action. On

Page 155: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

146 J. L. S A U V A N

constate en outre que le fonctionnement crCe une boucle rdgulatrice passant par l’environnement, la mCmoire, le systkme effecteur et A nouveau l’environnement.

S T R U C T U R A T I O N A U S E C O N D D E C R B

La nCcessitC de rCduire le nombre d’C1Cments la rend indispensable. Cette Cconomie peut s’envisager simultantment de plusieurs faCons : en branchant (comme en phy- siologie) plusieurs capteurs sur un mCme canal d’information et en faisant en sorte qu’un seul canal puisse commander de nombreux effecteurs. Cela ne sera pas dicrit ici.

Un troisikme moyen consiste B juxtaposer des centres d’intkgration disposant chacun de peu de canaux. Alors que vingt canaux de capteurs et vingt d’effecteurs exigeraient un unique CI de plusieurs milliers de milliards d’ClCments, cinq CT disposant globale- ment du mCme nombre de canaux n’exigeraient que 23,000 Climents. On peut donc jouer sur la rkpartition des canaux selon les rCsultats recherches.

Les centres d’association (CA) sont destines a solidariser les diffkrents CI. 11s en- registrent la simultantitk des situations et celle des actions dans les CI qu’ils associent. On comprend que les capteurs et les effecteurs des diffirents CI ne sont pas de mCme nature d’un CI i l’autre. Soient les CI no. 2 et no. 3. Le premier posskde m capteurs binaires, le second n. Le CA correspondant aura mn ClCments qui lui permettront d’enregistrer n’importe quelle combinaison possible des capteurs de 2 et de 3. De mCme en ce qui concerne les effecteurs.

Un certain environnement correspond i un certain groupe de “situations” dans l’ensemble des CI. Ce groupe est une situation complexe: SC. I1 y a de mCme des actions complexes aC et des actes complexes: AC.

L’existence des CA provient de la nicessiti o i ~ Yon se trouve d’interdire l’extraction de n’importe quelle combinaison de situations ClCmentaires qui ne correspondrait nullement A une situation complexe rtellement enregistrke. Dans le dispositif d’ex- traction l’apparition des situations dCjia memoristes est asservie a l’inscription de leur coexistence dans les CA.

In such a conception each CI corresponds to a physiological system forming a cybernetic loop (the audition-phonation system, for example). In this perspective one may wonder whether every physiological sensor system is not necessarily associated with an effector system.

Structuring of the engrams. This memory functions by successive time steps, each corresponding to the duration of an act. At each step a 'horizontal' link is established in the CAs between the various CIs. From one step to the next, within each CI, a 'vertical' link is established from one elementary act to the following one. This grid of horizontal and vertical links constitutes the history of the system. It must necessarily be respected during the extraction steps.

FUNCTIONING

Inscription. Each inscription step, corresponding to the duration of an act, is divided into three parts intended successively for the reception of the information concerning the three factors of the act: starting situation, action, resulting situation. Simultaneously, the inscriptions are made in the CAs.

Rememorization or extraction. The most efficient way of interrogating such a memory is to put the following question: what is the sequence of complex acts which makes it possible to pass from complex situation A (generally the present situation) to complex situation N? There generally exist several more or less distinct paths, corresponding to acts, which join these two situations (Fig. 2).

Fig. 2. (Several paths of acts joining situation A to situation N.)

The real process of extraction is very complex. It can be schematized as follows: by applying a voltage to the binary element representing A and collecting it on the binary element representing N, one traces electrically the sequence of transformations which lead from A to N. In practice it is more convenient to proceed by recurrence, working back from N towards A. This extraction supplies the following information: (a) the shortest strategy for transforming A into N; (b) any strategy in which a given intermediate situation must obligatorily figure (or, conversely, must obligatorily be avoided); (c) a description of all the intermediate situations; (d) a list and description, in order, of all the actions necessary for this transformation.

Prtcisons, s’il le fallait, que ces strattgies ne sont “les meilleures” que par rapport a 1’expCrience et aux capacitts propres de la mtmoire. I1 ne s’agit pas des meilleures strategies “possibles” dans l’absolu. Nous ne les connaissons d’ailleurs pas nous-mCmes. On verra cependant plus loin que cette mCmoire est capable d’amtliorer d’une faCon spontante, et cela dans des proportions notables, les performances acquises par simple expkrience. On notera que: (a) en interrogeant un seul ClCment binaire on a immtdiate-

Page 157: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

148 J . L . S A U V A N

ment comme rtponse l’ensemble des informations qui constituent les coordonntes de cet tltment, informations ayant trait A deux situations et a une action; (b) il n’y a pas de chronologie inscrite. Le reptrage des actes est relatif ti d’autres actes. Un systbme capteur peut, en mtmorisant les indications d’une horloge, introduire des repbres teinporels dans la chaine des actes.

A P P A R I T I O N S D E P R O P R I B T ~ S N O U V E L L E S

1. Apprentissuge. Soit Cl la chaine d’actes effectuts au hasard qui relie une situation complexe A a une autre T. Si cette chaine prtsente un circuit, c’est a dire si elle repasse par une meme situation complexe intermtdiaire (Fig. 3), lorsque le syst&me tentera un

A B

U

Apprmhssirge type 1 - , , z -------

efghrk lmn 13 14 15 Cycle feerme

T

Fig. 3

second essai pour transformer A en T le passage par le circuit se trouvera elimint par le systbme d’extraction. La strattgie s’en trouvera allegee. C’est le mode d’apprentis- sage No. 1.

Let C2 be another chain of acts linking B to U and crossing the chain C1 at two points (at the level of two situations). Any strategy tending to transform A into T will henceforth have the choice between the two paths which join these two points of intersection. It is the shorter path that will be adopted; the strategy will thereby be simplified once more: this is learning mode No. 2.
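
Both learning modes then fall out of the same shortest-path extraction: a closed cycle is simply never part of the shortest route, and a crossing chain that offers a shorter branch between the two points of intersection is preferred automatically. Reusing the hypothetical extract_strategy sketch given above (our construction, with arbitrary labels for the situations and actions), a toy run might look like this:

    # Learning mode No. 1: a chain of random acts from A to T that revisits the
    # intermediate situation P (a closed cycle P -> Q -> P).
    acts = {("A", "a1", "P"), ("P", "a2", "Q"), ("Q", "a3", "P"), ("P", "a4", "T")}
    print(extract_strategy(acts, "A", "T"))   # ['a1', 'a4']: the cycle is eliminated

    # Learning mode No. 2: a second chain from B to U crosses the first one at the
    # situations X1 and X4 and offers a shorter branch between them.
    acts = {("A", "c1", "X1"), ("X1", "c2", "X2"), ("X2", "c3", "X3"),
            ("X3", "c4", "X4"), ("X4", "c5", "T"),
            ("B", "d1", "X1"), ("X1", "d2", "Y"), ("Y", "d3", "X4"), ("X4", "d4", "U")}
    print(extract_strategy(acts, "A", "T"))   # ['c1', 'd2', 'd3', 'c5']: the shorter branch is adopted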

2. Creation of engrams. This memory is liable to make errors, but they are fruitful ones. That is even one of its principal properties. Let us explain.

Let a and b be two complex acts composed respectively of the following elementary acts contained in six centres of integration:

a: A - B - C - D - E - F
b: G - B - H - I - E - J

It will be seen that they have in common two elementary acts, B and E, in non-contiguous CIs (Fig. 4).

Fig. 4

In certain cases, during the extraction period, the following complex acts, which have never been memorized, may be 'constructed':

c: A - B - H - I - E - F
d: G - B - C - D - E - J

These latter acts are therefore imagined. At this point two things can happen.

The synthetic complex act may prove unable to integrate itself into the experience of the memory, that is to say it finds no place in the grid of relations between elements. This is the most frequent case, and the extraction system purely and simply eliminates it.

It may, on the contrary, integrate itself perfectly into the experience of the memory; the memory can attribute to it a fictitious evolution conforming to its previous information, and the act may turn out to be compatible with the laws of the environment of the system. The transformation of this act can take place in the memory in such a way that it results in a real complex act.

This process corresponds to four eventualities:

(a) Intuition. The act is really possible; it is even capable of being realized by the system later on. The system simply had no experience of it. The system has had the intuition of the existence of this act.

(b) Invention. The two situations which characterize the act do not exist in the environment of the system. But the system, by applying the strategy indicated by the memory, finds itself capable of making one or the other of these situations appear in the environment. It has invented, and has realized its invention.

(c) Ideal. The situations, while offering no incompatibility with the laws of the environment, do not exist and cannot be created. The system may even search for them. It is pursuing an ideal.

(d) Imaginaries. In quite particular cases the situations are not compatible with the laws of the environment. Nevertheless they can insert themselves into the memory and evolve there without causing any blockage. Thereby these imagined situations create in the memory unexpected relations between acts apparently without any link between them. These situations serve as an intermediate chain within a chain created by them, a chain which links these acts. An apparently logical, or at least non-contradictory, link is thus established between two acts which could not be correlated by any other means. The system has constructed an 'imaginary'.

It is of course understood that nothing in these deductions authorizes us to evoke the emergence of any consciousness whatever. That problem belongs to another category of simulators.

Here it is simply a matter of combinatorial possibilities which bring out certain properties of thought. There is perhaps, on this subject, a scale of values to be revised.

3. Unlimited reception capacity. We have seen why the matrices of binary elements catalogue all the possible combinations of information. Likewise, all the types of correlation between centres of integration are catalogued in the centres of association. Finally, all the links in time exist in each CI. One cannot therefore conceive of an act which, on the one hand, would not have its place in such a memory and which, on the other hand, would not find relations established with the act which preceded it and the one which follows it.

Whatever the duration of functioning of this system, its memory will receive the successive items of information indefinitely. Two identical acts will obviously not be differentiated in time, since it will be the same binary element which serves as their support; but when the second act has taken place it will establish new links between this element and new elements depending on the new 'context'. These various links characterize the fact that this act has been repeated several times. Failing that, the successive existence of two identical acts will not be revealed at extraction. This is indeed what happens in biological reality, where two identical situations can be differentiated only by their context; otherwise the illusions of thought appear.

As a remote consequence of this, let us note that if the laws of the environment change, and change too often, the links between elements multiply and the content of the memory becomes unusable. Anything whatever seems able to happen. The memory, the image of its universe, becomes irrational, absurd, incoherent.

DEEPER SIGNIFICANCE OF THE SETS OF ELEMENTS

In such a memory a great number of binary elements are destined to remain unused. In certain cases, as for example in a device for automatic translation, where the content of the memory is meant to evolve little, the CIs would be genuine, more or less immutable dictionaries, and most of the elements destined to remain in position 0 would be excluded at construction.

In the general case it is the apparently useless presence of numerous elements destined to remain in state 0 which gives their whole significance to the elements switched to state 1. Besides, one does not know in advance which elements are destined to switch and which will not.

This superabundance of elements allows the system to function correctly whatever the laws of the universe into which it is plunged. It is not a question here of building a machine with prefabricated behaviour, programmed for this or that environment. It can adapt itself to any circumstances whatever (short of destruction). Here we find again a property of the most highly evolved biological systems. It is not a matter of homeostasis, for homeostasis, for the central nervous system, is a secondary function.

On the other hand, not enough attention is drawn, in information theory, to the fact that it is not the information one receives that is important (the materiality of the message). What is really significant is that the arrival of this message affirms the non-existence of every other possibility. If the 'code' used comprises only two elements, a message has as its only value that of denying one possibility. If the 'code' has a thousand elements, the arrival of a message denies 999 other possibilities. It is this that constitutes the true significance of the message. Before being transmitted, the information belongs to a sort of cloud of a certain number of items of information, all of them possible. As soon as the information has reached the one for whom it is intended (the one capable of understanding it), the cloud is split into two mutually exclusive groups: the group composed of this single item of information, for which the corresponding potential fact becomes actual, and the group of all the other facts which, from being potential, vanish into non-existence.

In the memory described here, the mere observation of state 1 on a binary element (which, as can well be seen, means nothing in itself) will immediately make it possible to extract a quantity of information directly proportional to the number of elements figuring in the matrix to which it belongs. Nothing in this, moreover, contradicts information theory.
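
A small numerical aside in Shannon's measure (our addition, not part of the author's argument): pointing at one particular element among the 2^c x 2^o x 2^c cells of a CI supplies log2(2^c x 2^o x 2^c) = 2c + o bits, which is exactly the number of binary digits needed to spell out the starting situation, the action and the resulting situation that the element encodes.

    import math

    def bits_from_one_element(c, o):
        """Information, in bits, gained by learning which single element of a
        CI with c binary sensors and o binary effectors is being designated."""
        return math.log2((2 ** c) * (2 ** o) * (2 ** c))

    print(bits_from_one_element(3, 2))    # 8.0  (= 2*3 + 2)
    print(bits_from_one_element(10, 4))   # 24.0 (= 2*10 + 4)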

In short, this model of memory is characterized by: (1) the concentration of a large quantity of information on a single binary element; (2) the internal projection of the history of the system as an uninterrupted sequence of engrams; (3) an unlimited reception capacity; (4) the pre-eminence of the search for efficiency in action over fidelity to intermediate details (learning); (5) the possibility of errors which generate imagination, whence intuition, invention, idealization.

This memory receives its information (like its psycho-biological models) by way of chance, play and education. For this reason the experiences which appear most useless finally prove indispensable to the acquisition of its efficiency.

RESUME

The functioning of an active system can always be divided into a sequence of acts of identical structure. An act is the whole constituted by: (a) a certain state of the environment, (b) the action which the system undertakes upon it, (c) the new state of the environment which results from it.

A system having a finite number of sensors and effectors can take part only in a finite number of acts. The memory described comprises exactly this number of binary elements. Links are established between the successive acts.

The binary elements are distributed among various matrices, each of which constitutes a centre of integration. The centres of integration are linked by centres of association.

The memory structures itself little by little in such a way that the following properties appear: intuition, invention, idealization, creation of 'imaginaries'. These properties derive from a more general property of imagination. When a goal is assigned to it, the memory spontaneously extracts from its experience the corresponding optimal strategy.

All these operations are performed in a very limited number of electronic switchings; the result is extracted almost instantaneously.

Whatever the duration of functioning and the length of the history of the system which employs it, the memory is able to receive all the information which reaches it.

SUMMARY

CYBERNETIC MODEL OF AN ACTIVE MEMORY WITH UNLIMITED RECEPTION CAPACITY

It is always possible to divide the functioning of an active system into a series of acts of identical structure. An act is the whole formed by: (a) a certain state of environment, (b) the action which the system exercises on it, (c) the new state of environment resulting from it.

A system having a finite number of captors and effectors can participate only in a finite number of acts. The memory described comprises this number of binary elements. Links are established between the successive acts.

The binary elements are distributed over various matrices, each of which constitutes a centre of integration. The centres of integration are linked together by centres of association.

The structure of the memory is developed little by little, in such a way that the following properties appear: intuition, invention, idealization, creation of ‘fancies’.


These properties derive from a more general property of imagination. When memory is assigned an objective, it spontaneously extracts from its experience the corresponding optimal strategy.

All these operations take place in a very limited number of electronic commutations; the result is extracted almost instantaneously.

Whatever the duration of functioning and the length of the history of the system employing it, the memory is capable of collecting all the information which it receives.


Preliminary Notes on a Functional Scheme of Human Thought

N. STANOULOV*

The present report analyses certain functional features and aspects of human thought, ensuing from familiar works accessible to the author on cybernetics in general and the theory of information in particular. A possible functional scheme of the process of thinking has been set up by means of appropriate generalizations and syntheses. The author does not claim to give a physiological explanation of this most complex process in animate matter. In fact, he is not a physiologist. Despite the fact that the functional scheme synthesized in the above manner contains no new elements, the author believes that the ideas expounded in the report and the approach to the problem could prove useful from a theoretical and experimental point of view both for a chronological examination of the process of formation and development of the reflective brain apparatus in man and for a concrete study of the existing apparatus.

The brain, the cerebral cortex in particular, is the seat of intellectual human activity. Using different experimental methods and devices, physiology is trying to establish the structural features of the cerebral cortex and the link of the various structures to the activities of the brain, to the functional purpose of the various sections of the cortex. An enormous number of physiological phenomena and anomalies has been accumulated and described in the course of many years. Existing knowledge has been systematized. The correlations between the numerous facts have been duly noted, and so have the methods of affecting them. All this helps in the correct differentiation and diagnosis and in the more efficient treatment of the various deviations from normal brain activities. However, such important organic categories as the mechanism of memory and the mechanism of the physiological process of thinking have remained an enigma. A probable reason for this is the lack of nerve cells (receptors) in the human body (or our inability to detect them) sensitive to and perceiving consciousness itself and the process of thinking, as well as corresponding executive cells (effectors) whose function and analysis could supply information about specific material processes which take place in the brain and in the cerebral cortex, and which could help in locating their bearers.

The study of the activity of the brain has been, in recent years, the object of other branches of science such as cybernetics and the branch dealing with the theory of information. As we know, one of the virtues of cybernetics is the fact that it advances a method of scientific study of complex systems (processes and phenomena), biological ones included. This is the method of analogies and models whose operation is isomorphic (or homomorphic) to that of the system studied. Cybernetics studies those of the homomorphic and isomorphic systems in nature and in society whose manifestations are the same (with a lesser or greater degree of approximation), irrespective of their heterogeneous structure. Consequently, the isomorphic model reproduces certain aspects or laws governing the phenomenon studied. To put it in a figurative way, one of the principal tasks of cybernetics is, by establishing the 'diagnosis' of the living being - with the help of physiology - to explain the mechanism of its actions through model structures.

* Present address: Ul. K. Peitchinovitch, Sofia (Bulgaria).

An essential feature of the nervous system and the brain is the signal character of their functioning. This feature refers in a general way to the physiological regulators as well (Jones, 1961). The nerve impulse signals are the bearers of diverse information which governs the activity of the organism and expresses its state and interaction with the particular environment. Reception, preservation (retention), transformation and transmission of information take place in the brain. Science today knows very little about the manner in which information moves in the brain, and about the nature and mechanism of the active transformation of incoming data into outgoing ones. A number of schemes has been advanced with respect to the process of thinking in man and to an analysis of thinking. The process of thinking has been analysed from the point of view of the information theory (Goldman, 1958). The idealized schemes shown there (Fig. 1) give, as a matter of principle, a clear picture of the signal processes and of the functional transformations in the process of thinking. The homeostat of Ashby (1958) makes it possible to obtain an idea about the functioning of the brain. The operation of its mechanism simply aims at ensuring the preservation of the inner medium and is analogous to that of the cerebellum of an animal. The functional brain model of Blum (1958) represents a detailed dynamic cybernetic system whose physical nature is based on the problem of equilibrium (i.e. maintenance) and of non-equilibrium (i.e. violation) of the oscillations in a number of systems, on the basis of information furnished by the environment.

Fig. 1. A picture of the signal processes and of the functional transformations in the process of thinking.

The following functional signal-flow scheme of the process of scientific thinking (Fig. 2) has been drawn up on the basis of the theory of information and of certain ideas expounded in Information Theory (Goldman, 1958) and elsewhere.

Fig. 2. The signal-flow scheme of the process of scientific thinking.

We shall confine ourselves here to presenting and explaining the scheme without dwelling on the emergence, development and inheritance, during the many thousands of years of evolution of animate matter, of the various elements taking part in the functional scheme. The scheme contains the following blocks:

1. block of perceptions (internal and external);
2. block of limitations of thinking;
3. block of memory (the assemblage of ideas, images, notions, etc.);
4. block of selection of information (smoothing, prediction);
5. block of programme (criterion);
6. block of inferences;
7. block of reaction (assessment of former inferences).

The basic parts of the scheme are the block of memory 3 and the complex of blocks 4, 5 and 7, which has been given the name of 'concrete thinking'. Man comes into contact with his environment by means of his organs of perception.

Ideas, images and notions concerning external objects, phenomena or their parts are accumulated in the memory during the various activities of the living organism. Let us assume, for the sake of convenience, that every idea, notion, etc. is preserved in the memory in a coded form, i.e. in the form of a separate message consisting of a particular type of signal or group of signals, such an assumption being a plausible one. Other messages are also preserved in block 3: subconscious, instinctive and others, which are isomorphic to the internal perceptions. Most of the messages retained in the memory are subject to a great variety of limitations related to probability and logic, causal and functional limitations, to certain regularities, arrangements, etc., part of which are isomorphic to the limitations existing in the environment, such as causal and functional ones, while others are worked out in the process of human practice (related to probability and knowledge, such as written and spoken language). Of course, when man considers a certain problem, he deals for a certain period of
time with some concrete subassemblage of messages, though this does not change the community of ties between blocks 1, 2 and 3. The manner in which the numerous messages are retained in the memory is not important in this instance either. Block 3 is a sui generis generator of messages and, in the assemblage of a practically infinite number of messages, becomes a genuine generator of noise. The function of block 4, which is in constant contact with block 3, during the process of thinking, is to select one message or combination of appropriately related messages from among the many messages in 3 and to transmit it to the exit - block 6. To the best of our knowledge, the concept of function or capacity of selection has been introduced by Ashby (Shannon and McCarthy, 1956). In the words of its author it is not impossible that what we call 'mental capacities' is equivalent to 'capacity for suitable selection' (Ashby, 1959). Hence the capacity of selecting a particular message or a combination of messages is a process of temporary noise attenuation of all remaining messages. This mental function is on the one hand affected by block 5, as is the case with the programme-plan of selection, initial data, scientific method, strategy and others. This block is connected with blocks 3, 1 and 2 respectively, and in a great number of cases it is connected with certain experiences, emotional states, past experience, and the ability to use this experience. One should not rule out the possibility that the content (in its main lines at least) of block 5 may be dictated by block 4 (see the dotted line in the scheme). It is likewise evident that the absence of block 5 will in turn transform the function of selection into a generator of messages, i.e. it will make block 4 superfluous, as in this instance its functions will be the same as those of block 3. On the other hand, the function of selection is affected by former inferences through block 7. It has been established that a particular inference may be accompanied by taking a decision (execution of a reaction such as a muscular one), in which case block 6 is of necessity connected with the block of external perceptions 1, or it may not be accompanied by taking a decision. In the latter instance, there is only a mental analysis and assessment of the inference and block 6 is not connected with memory. Consequently, to express all that in the language of the information theory, block 4 effects what is known as prediction as well, i.e. the function of selection (of the successive inference) is affected by the assessments (reactions) of the preceding inferences. Both types of inferences are absolutely indispensable to man; the first one seems to be more closely related to the existence of the organism and the second one to its intellectual activities. However, they have had a varying significance during the different evolutionary stages of development of mankind: the performance of reactions was of vital importance to primitive man, whereas today, against the background of an increasing automation of industrial processes, it is the 'consideration of hypotheses without taking decisions' (Poletayev, 1960) which enjoys a priority. Blocks 5 and 7 have a two-way connection; the reaction takes place on the basis of the plan of mental activities adopted. The latter may in turn undergo a certain change or correction in accordance with the information available in block 7. The feed-back channel has been marked by a dotted indicator in the scheme presented by Fig. 2. In their essence, the inferences (block 6) are a processing of the information contained in the mental messages, which has been stocked in the memory, by means of the complex of 'concrete thinking'.


The described functional scheme of the process of thinking in man contains blocks whose functions can be reproduced in practice (in smaller or greater detail) at the present state of technology. In point of fact it is not difficult to accept the separate blocks of the scheme as the familiar units of a calculating machine connected in a specific manner. For instance, when it comes to digital computers, blocks 1 and 2 represent the input unit with the initial data, block 3 is the storage unit, block 4 a type of arithmetical unit - processing of input information, block 5 programme of action, block 6 output unit with outgoing (processed) information, and block 7 is the control unit. In the case of an analogue computer, block 3 may be a noise generator (a noise valve, for instance), while block 4 may be a combination of electric filters re-tuned appropriately. This scheme can likewise be likened to a machine controlling a complex industrial process.
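
To make the analogy with a computing machine concrete, the seven blocks can be wired together in a short toy program. Everything below is an illustrative sketch under our own assumptions: the data types, the random choice standing in for selection and the trivial assessment rule are not in the paper; the point is only the flow of signals, with perceptions and limitations feeding the memory, the programme and past assessments steering the selection, and each inference being fed back.

    import random

    def think(perceptions, limitations, memory, programme, steps=3, seed=0):
        """Toy signal-flow loop over blocks 1-7: memory (block 3) accumulates
        coded messages; selection (block 4) picks one compatible with the
        programme (block 5) and with past reactions (block 7); the inference
        (block 6) is stored back and assessed."""
        rng = random.Random(seed)
        reactions = []                              # block 7: assessments of former inferences
        memory |= set(perceptions)                  # blocks 1 and 2 feed block 3
        for _ in range(steps):
            candidates = [m for m in sorted(memory, key=str)
                          if programme(m)                       # block 5: programme/criterion
                          and all(ok(m) for ok in reactions)    # block 7: feedback on selection
                          and m not in limitations]             # block 2: limitations of thinking
            if not candidates:
                break
            inference = rng.choice(candidates)      # block 4 -> block 6: selection yields an inference
            memory.add(("inferred", inference))     # the inference is fed back into the memory
            reactions.append(lambda m, banned=inference: m != banned)   # assessment: do not repeat it
        return memory

    print(think(perceptions={"red light", "bell"}, limitations={"bell"},
                memory={"green light"}, programme=lambda m: isinstance(m, str)))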

Blocks 4, 5 and 7 will present considerable difficulties in modelling the process of thinking, block 4 in particular. This makes it necessary to answer the following questions : what is the optimum selection, what determines its nonambiguity, how can the signal be separated efficiently from the noise in block 4 under conditions of non-linear superposition of signal and noise, are the separate selections dependent or not, etc. These and other problems of this kind have been treated in detail (Goldman, 1958) and we need not dwell on them here.

The objection can be made that a strictly defined model has been advanced for explaining a physiological phenomenon which is of a statistical character (Moles, 1958). True enough, disrupting the activity of any of the blocks of the functional scheme described (blocks 3, 4, 5 and 7 in particular) will affect the operation of the scheme appreciably. On the contrary, the destruction of even one-half the total number of brain cells (assumed to be about 20,000 million) will not affect the quality and quantity of mental activities in an appreciable manner (Moles, 1958). The remarkable capacity of the brain (composed of thousands of millions of cells) to act with precision regardless of accidental disturbances or errors of each one of the cells, can be interpreted to a great extent on the strength of the assertion that - at least as far as the highest intellectual manifestations are concerned - there exist no strictly defined and unchanging functional fields in the brain, the cerebral cortex respectively, but a great number of duplicating and interchangeable nerve structures which are connected in a variety of ways with the functions performed by them. Of course, it could be assumed that each block of the functional scheme described has been duplicated the necessary number of times so that any disruption of the operation of any one of them is followed by the inclusion of one of its reserves without any apparent disturbance in the operation of the scheme as a whole. However, such a complication of the scheme does not affect its operation in any way, though it brings the model advanced more closely to the process of thinking in a structural sense as well.

The scheme indicated can hardly exhaust the whole aggregate of ties which exist or must exist among the different functional blocks. Suffice it to indicate, for instance, that as a matter of fact block 5 is indissolubly linked with block 2 and could even be incorporated in it, and that blocks 2 and 5 are part of the memory. However, the manner of connection adopted aims at giving a clear presentation of the dynamics of the general scheme. This complex interconnection between the individual blocks seems to confirm, in principle at least, the interchangeability and capacity of duplication typical of the operation of the different nerve structures which were mentioned above.

Of course, the study of the scheme described cannot be an end in itself. It must be carried out in very close connection with the course of the process of thinking in man and must serve the purpose of its explanation. This, however, requires the solution of a number of very difficult problems, some of which have been mentioned briefly. One of the fundamental problems in discovering the mechanism of thinking is the discovery of the methods of coding information in the brain and its distribution along the cerebral cortex. Most naturally this can be greatly aided by research aimed at the solution of such important problems as explaining the mechanism of operation of the great variety of physiological regulators in living organisms, and by attempts at simulating (by purely technical models or by combined methods) the nerve and cerebral processes in animals and in man.

It goes without saying that any model whose aim is to explain certain aspects of the process of thinking in man can be used in a great variety of experiments and researches in many fields of human science. Everything depends on how blocks 2, 3 and 5 are selected, while the remaining blocks of the scheme can be considered common ones, in the majority of cases at least. Certain small adjustments of this scheme may prove useful in explaining certain phenomena in the field of medicine, though this is beyond the scope of the present paper.

S U M M A R Y

Cybernetics and information theory are applied to some processes of human thought. On the basis of this application, a signal-flow scheme for the realisation of scientific thought activity is proposed, consisting of suitably connected functional blocks linked through direct and feedback chains. The proposed functional scheme can be used for design and model investigations in scientific research, in syllogisms, and in the solution of various problems in medicine, mathematics, engineering, etc.

REFERENCES

ASHBY, W. ROSS, (1958); L'homéostasie. L'ère atomique. Genève, René Kister, 8, 121-123.
ASHBY, W. ROSS, (1959); Introduction to Cybernetics. London, Chapman and Hall.
BLUM, G., (1958); Modèle des fonctions cérébrales. L'ère atomique. Genève, René Kister, 8, 126-127.
GOLDMAN, S., (1958); Information Theory. London, Constable and Co.
JONES, V. W., (1961); Certain properties of physiological regulators. Proceedings 1st International Congress of I.F.A.C., 2, 950-961 (in Russian).
MOLES, A. A., (1958); La cybernétique est une révolution secrète. L'ère atomique. Genève, René Kister, 8, 5-9.
POLETAYEV, J. A., (1960); Signal: On Certain Concepts in Cybernetics. Sofia, Tehnika Publishing House (in Bulgarian, translation from Russian).
SHANNON, C. E., AND MCCARTHY, J., (1956); Automata Studies. Princeton, University Press.


Histology, Histonomy, Histologic

V. BRAITENBERG

Centro di Cibernetica del Consiglio Nazionale delle Ricerche, Istituto di Fisica Teorica, Università di Napoli (Italy)

Cajal's Histology of the Nervous System (1911) is very likely to contain more information about animal brains than any other text of neurology (except perhaps Ariens Kappers' Comparative Neurology, 1920). We notice three astonishing facts about this book: (a) this information is to a large extent unrelated to current physiological research, since Cajal's detailed description of the histology of many types of grey substance has no equally detailed physiological analysis to match it; (b) no matter how rich in details, the book is really a sketchy account of the immense variety of neurological structures and is not likely to contain an answer to any particular question about some particular structure in some particular species of animals; (c) the book contains reliable pictures but is nearly devoid of quantitative indications.

From these facts it is easy to predict that the future development of the science of the brain will have to comprise a good deal more of the kind of histology exemplified by Cajal and that, wherever possible, the cumbersome histological descriptions will have to be replaced by more manageable quantitative structural rules. This is a trend which we associate with the names of Bok (1959), who called it 'histonomy', and Sholl (1956), who was the first, to my knowledge, to present measurements of 'neuronal connectivity' in such a form as to make them immediately applicable to mathematical models of nervous function (Beurle, 1954a,b; Uttley, 1955).

The other prediction, which is the point of this paper and reflects the programme of this laboratory, concerns the development of much closer ties between the 'histonomical' descriptions of nerve nets and the electrophysiological evidence. Far beyond the widespread attitude of the neurophysiologist who uses microscopical techniques and a histological atlas only to check the position of his electrodes, and consequently to associate the name of a structure with the name of an electrical epiphenomenon, we envisage a method of translating structural descriptions of nerve nets directly into functional schemes and hence into possible explanations of neurological structures as specialized logical 'machines'.

Such a programme is likely to offend two separate idola tribus of the biological sciences. For one thing it substitutes observation of existing structures, and their interpretation in a functional way, for direct physiological experimentation. There is a biological taboo against inferring a function from a structure, founded on a well-documented series of erroneous guesses in various fields of biology. In the specific case of the nervous system we may however point out that a large part of our active experiments on the brain, involving electrical stimulation with rather macroscopical electrodes, reveals facts that are orders of magnitude grosser than the combinatorial complexity which the histological picture suggests, and that this may be for some time to come an insurmountable difficulty in neurophysiology, at least as far as the central nerve nets are concerned, which cannot be reached by well-controlled peripheral stimulation. Because of this difference in the fineness of definition, if we propose to make cybernetical analyses of central nerve nets at all, it is plausible that for some time it will be histology that leads the way.

The other idol is more serious. A functional interpretation of a certain histological structure may be taken as an explanation of the structure itself, and this is clearly a teleological explanation. The general methodological discussion is simple, since every teleological explanation of an observed biological pattern becomes a causal explanation when it is prefixed with the reasonable hypothesis of Darwinian evolution, converging toward optimally functioning organisms. But in practice we are aware of the obvious risk of mistaking evolutionary hang-overs, or some consequences of embryological or mechanical necessities, for an indication of a particular function. However, one very soon runs short of trivial 'mechanical' explanations when confronted with the stupendous variety of neuroanatomy, and we may well use the 'working bet' that what we gain by applying our method weighs more heavily than the mistakes we are bound to make.

Should it be necessary to challenge those critics who propose embryological rather than teleological explanations of neuro-anatomy, I would point out the differences between the three large vertebrate cortices, the pallium, the tectum and the cerebellum: the first made up mainly of extremely characteristic 'pyramidal cells' of widely varying sizes, all aligned with their tops at the surface of the cortex; the second, the tectum, mainly built up of neurons reaching from the surface to the bottom and ramifying in a few special layers; the third, the cerebellum, having an extremely characteristic type of lattice symmetry, with all output fibres stemming from Purkinje neurons which are alike except for position. It seems extremely unlikely that this could be explained merely by invoking principles of the mechanics of growth. However, I would ask the critic to explain why the dendrites in the bulbar olive as well as in the dentate nucleus ramify to form quite regular small spheres, whereas in the reticular substance they seem to reach as far as possible; why the afferent fibres in the interpeduncular nucleus weave back and forth several times between the right and the left end of the nucleus, etc.

M E T H O D

The method which we propose for the functional analysis of structure in the grey substance is the following. First we choose a piece of grey substance that can be easily recognized (e.g. by its position relative to other parts) in various species. The highly specialized 'cortices' of vertebrate brains, such as the cerebral and the cerebellar cortex, the optic tectum, the retina, seem promising because of their repetitive structure and


because of the considerable differences between them, which we firmly believe to be an indication of essential functional differences. Having defined such a structure in various species, we observe interspecific differences, as well as local variations within one such structure (e.g. the areas of the cortex), as opposed to those characteristics which are constant throughout the structure and in all species. These invariant characteristics can be compounded into a description which we suppose to reflect the general functional scheme of such a structure. It is this general functional scheme which we must first understand.

A set of postulates taken from direct physiological evidence on single neurons and referring to such structural details as can be easily recognized in the histological picture can then be introduced. We may suppose that excitation is conducted decrementally in dendrites, non-decrementally and with a speed proportionate to their thickness in axons, that it is transmitted from dendrites to the axon within the neuron and from axonal terminations to dendrites between one neuron and the next, etc. There are some unfortunate lacunae in this histo-physiological dictionary for which we must make arbitrary assumptions, the most important being the impossibility of distinguishing excitatory from inhibitory connections in histology. (For a discussion of this general procedure and of the 'dictionary' see Braitenberg, 1961a.)

ESSAY O N T H E C O R T E X

The case of the cerebellar cortex has been studied previously and was presented at the First International Conference on Medical Cybernetics (Braitenberg, 1961b; Braitenberg and Onesto, 1960). Some plausible physiological assumptions together with the histological evidence allow us to translate the structure of the molecular layer of the cerebellum into a functional scheme which represents a clock, in the sense that it will translate distances into time intervals and vice versa. The encouraging fact about this result is that the functional interpretation gives in itself a teleological explanation of the most relevant details of the cerebellar structure, indeed of those anatomical features which are common to the cerebella of all vertebrate species.

The molecular layer of the cerebellum is a particularly fortunate object for our purposes since an 'explanation' can be given independently of the knowledge of the excitatory or inhibitory nature of the synapses involved, the temporal relationships being the most salient feature. In most other cases our ignorance about this fundamental point makes itself felt in a much more serious way. Still, something can be said about a structure of logical relations even if we can never decide whether a relation is represented by a certain logical function or by its complement.

An abstraction at second hand of the cerebral cortex has been attempted previously (Braitenberg and Lauria, 1960). The equation

$$u_h(t + \tau) = \mathbf{1}\Big[\sum_k \sum_r a^{(r)}_{hk}\, u_k(t - r\tau) - s_h\Big],$$

where the function $\mathbf{1}(x)$ is defined as $\mathbf{1}(x) = 1$ for $x > 0$ and $\mathbf{1}(x) = 0$ for $x \le 0$, was introduced by Caianiello (1961) as a very flexible representation of the dependence of 'all-or-none' elements, such as neurons, upon each other.
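Purely as an illustration, a minimal sketch of discrete-time dynamics of this kind is given below (in Python); the couplings, delays and thresholds are invented numbers and are not taken from the histology discussed in this paper.

```python
import numpy as np

# Discrete-time threshold dynamics of the neuronic-equation type:
#   u_h(t + tau) = 1[ sum_k sum_r a_hk^(r) * u_k(t - r*tau) - s_h ]
# All couplings, delays and thresholds below are invented for illustration.

def step(history, a, s):
    """history: (R, N) array of the last R binary state vectors, most recent first.
    a: (R, N, N) couplings; a[r, h, k] weights u_k(t - r*tau) acting on neuron h.
    s: (N,) thresholds. Returns the state vector at t + tau."""
    drive = np.einsum('rhk,rk->h', a, history)   # total excitation arriving at each neuron
    return (drive - s > 0).astype(int)           # the step function 1(x)

rng = np.random.default_rng(0)
N, R, T = 5, 2, 10                               # neurons, delay steps, time steps
a = rng.normal(0.0, 0.5, size=(R, N, N))         # signed coupling coefficients (hypothetical)
s = np.full(N, 0.3)                              # a common threshold S, as assumed later in the text
history = np.zeros((R, N), dtype=int)
history[0, 0] = 1                                # seed one active neuron

for t in range(T):
    u_next = step(history, a, s)
    history = np.vstack([u_next, history[:-1]])  # shift the delay line
    print(t, u_next)
```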

We should like to have, for any two neurons of a piece of grey substance, represented by the functions $u_h(t)$ and $u_k(t)$, a measure of their coupling coefficients $a_{hk}$ and $a_{kh}$, which are, in general, different; we need, in other words, a measure of the excitation transmitted from one neuron to the other, a measure which may be positive or negative, representing, in the latter case, the phenomenon of inhibition.

For the case of the cerebral cortex direct recording with microelectrodes has not yet given any indication of the value of these coefficients, nor of their dependence upon time, due to slow decremental conduction in dendrites etc., for which the superscript $(r)$ was introduced in the above equation. Also, we have no way of getting a direct measure of the thresholds $s_h$ against which the excitation arriving at a neuron is pitted.

Therefore, it would seem desirable to obtain an indication of the functional


Fig. 1. An abstraction from the histology used for the calculation of coupling coefficients between cortical neurons. Left: distribution of cell bodies of different types in layers I-VI. Cell bodies are drawn as if they were arranged in a cubic lattice; their density corresponds roughly to the actual density in human temporal cortex. Middle: Pu, Przr, Pv, Fw, Fvr, M, G are the seven neuron types of our abstraction. By the shape of their cell body they can be identified with the elements in the left part of the drawing. For each type we draw the dendritic field (solid outline), the axonal field (broken outline) and the connecting axon (arrow). Right: outlines (for most neurons only circles representing the region of the basal dendrites) of the dendritic fields of three columns of neurons (cf. left part of the drawing) to demonstrate the vast overlap. (From Braitenberg and Lauria, 1960.)


connections between neurons in the cortex from the anatomy. Such an undertaking might appear extremely difficult if one considers the large number of afferent and efferent contacts of each neuron: all the dendritic 'spines' of one neuron (supposing that they represent unitary synapses) and all the fine branchlets of its axonal ramification. There is, however, a consoling thought. For all we know, when a spot of excitation reaches the cortex from the periphery, it will most likely be carried by a bundle of neighbouring fibres, and even if it is carried by a single fibre it will be distributed as a cloud of excitation in the region of terminal arborization, which even in the 'specific afferents' has a diameter of at least 300 μ (Fig. 12). In other words, it is always with large patches of excitation that we must concern ourselves, and the detailed description of what happens in individual axo-dendritic contacts may be important for the question of memory, but is most likely irrelevant for the general blueprint of the cortex.

The following method of translation from morphological data into coupling coefficients was therefore devised. It leads to a rough model which, however, is subject to improvement once more microphysiological data on the single neuron become available, and which may, through a process of trial and error, help us to fix some of the as yet indeterminate constants.

The histology is given as an abstraction obtained from the literature, mainly from Cajal (1911). For every neuron we draw (Fig. 1) a simple geometrical body containing all its dendritic processes (dendritic field), another containing its terminal axonal ramifications (axonal field) and an arrow connecting the two.

From the variety of neuronal shapes we abstract a number of types on somewhat physiognomical criteria and determine their density and distribution in different layers of the cortex.

We define as the unit of excitation the amount of excitation E produced in 1 mm³ of grey substance by the arrival of an impulse in each of the axonal terminations contained in it. We suppose for each neuron of type T that the excitation $e_T$ which it produces when firing is proportionate to its size, say to the volume of its perikaryon $V_T$, a measure which we can obtain quite easily from the preparations:

$$e_T = c\, V_T$$

This reflects the plausible assumption that the effect of the firing of a small neuron is less than that of a larger one. Denoting as $N_T$ the density of neurons of type T we have

$$\sum_T N_T\, e_T = \sum_T N_T\, c\, V_T = 1,$$

which gives for the coefficient of proportionality c:

$$c = \frac{1}{\sum_T N_T V_T}.$$

In other words, in our abstraction the excitation produced by a neuron in one firing, expressed in units of excitation E, is equal to the volume of its perikaryon multiplied by the grey-cell coefficient of Von Economo, since the fraction

$$\frac{1}{\sum_T N_T V_T}$$

obviously represents this well-known measure of neuroanatomy. We suppose further that excitation is distributed uniformly in the volume $A$ of the

axonal field, and that a fraction $(DA)/A$ of the excitation $e_T$ of a neuron passes into the dendritic field of another neuron having an intersection of volume $(DA)$ with the axonal field in question. The coupling coefficient $a_{hk}$, representing the amount of excitation received by a neuron $u_h$ when the neuron $u_k$ fires, becomes

$$a_{hk} = \frac{V_k}{\sum_T N_T V_T} \cdot \frac{(DA)_{hk}}{A_k}.$$

We may further suppose that the thresholds of the neurons are proportionate to the volume $D_h$ of their dendritic fields; in other words, we may use the normalized coupling coefficients $\bar{a}_{hk}$, i.e. the density of excitation produced in the dendritic field,

$$\bar{a}_{hk} = \frac{V_k}{\sum_T N_T V_T} \cdot \frac{(DA)_{hk}}{D_h A_k},$$

and suppose all thresholds to be equal to a constant S. No matter how crude this may seem, our model has the virtue of containing much

of the information available from the histology and hence, for different structures, many of the structural differences which we would like to relate to the functional specialization. Thresholds remain arbitrary, but this may not be far from the truth, since for any real nerve net (and for realistic models: e.g. Farley and Clark, 1961) we are compelled to postulate a threshold-regulating mechanism which will keep the net, with varying input, on the narrow path between epileptic fit and coma*. The most serious shortcoming of our scheme is that it considers only positive values of excitation, for lack of the possibility of inferring the sign of the coupling from the histology. This, with time, will have to be corrected through direct physiological observations. All the other constants which we need can be directly read from the histology (or rather, histonomy). Some more facts (temporal relationships due to delay in axons and decremental conduction in dendrites) have been neglected in this rough scheme but could easily be incorporated in it.
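For illustration only, the recipe described above might be written out as follows; the neuron types, densities, perikaryon volumes and field intersections are invented numbers, not measurements from our preparations.

```python
# Sketch of the coupling-coefficient recipe of the text: the excitation of one firing
# is proportional to perikaryon volume, normalised so that one impulse per axonal
# ending in 1 mm^3 yields one unit of excitation E; a fraction (DA)/A of it reaches
# each overlapping dendritic field.  All numbers are invented for illustration.

# hypothetical neuron types: density N_T (cells/mm^3) and perikaryon volume V_T (mm^3)
types = {
    'pyramidal_small': {'N': 40000, 'V': 1.5e-6},
    'pyramidal_large': {'N': 2000,  'V': 8.0e-6},
    'stellate':        {'N': 30000, 'V': 1.0e-6},
}

grey_cell_coeff = 1.0 / sum(t['N'] * t['V'] for t in types.values())   # c = 1 / sum_T N_T V_T

def coupling(V_k, DA_hk, A_k):
    """a_hk: excitation received by neuron h when neuron k fires (units of E)."""
    return grey_cell_coeff * V_k * DA_hk / A_k

def coupling_normalised(V_k, DA_hk, A_k, D_h):
    """a-bar_hk: density of excitation produced in the dendritic field of h."""
    return grey_cell_coeff * V_k * DA_hk / (D_h * A_k)

# a large pyramidal cell k whose axonal field (0.05 mm^3) overlaps the dendritic
# field of a stellate cell h (0.002 mm^3) in a volume of 0.001 mm^3
a_hk = coupling(V_k=8.0e-6, DA_hk=0.001, A_k=0.05)
abar_hk = coupling_normalised(V_k=8.0e-6, DA_hk=0.001, A_k=0.05, D_h=0.002)
print(a_hk, abar_hk)
```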

Fig. 2. For explanation see text.

* See Prof. Ashby’s paper at this meeting.


The trouble is rather technical, since the actual calculations of elementary geometry which are needed to measure, say, the intersections of a rotational ellipsoid with

Fig. 3. Composite microphotograph of horizontal branches of the axon of a pyramidal cell of the lower layers of the cortex of Bos Taurus. Golgi preparation.


bottle-shaped structures of different sizes for various relative positions, constitute an unwieldy job. This could be avoided if the distribution of dendrites and of axonal terminations of the various neurons were given in some analytic, rather than geometrical, form. Such schemes have been proposed by Lauria in our laboratory.
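Where such analytic forms are not yet at hand, the intersection volumes could, as a stopgap, be estimated numerically. The sketch below does this by Monte Carlo sampling, with spheres crudely standing in for the ellipsoids and bottle-shaped fields; all centres, radii and sample sizes are invented.

```python
import numpy as np

# Monte Carlo estimate of the intersection volume (DA) of an axonal field and a
# dendritic field, standing in for the awkward elementary geometry of ellipsoids
# and 'bottle-shaped' fields.  Both fields are crudely modelled as spheres here;
# centres, radii and sample size are invented for illustration.

rng = np.random.default_rng(1)

def sphere(centre, radius):
    c = np.asarray(centre, dtype=float)
    return lambda p: np.sum((p - c) ** 2, axis=1) <= radius ** 2

def intersection_volume(field_a, field_d, lo, hi, n=200_000):
    """Fraction of random points inside both fields, times the bounding-box volume."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pts = rng.uniform(lo, hi, size=(n, 3))
    inside = field_a(pts) & field_d(pts)
    box = np.prod(hi - lo)
    return inside.mean() * box

axonal    = sphere(centre=(0.0, 0.0, 0.0), radius=0.30)   # mm
dendritic = sphere(centre=(0.2, 0.0, 0.1), radius=0.25)   # mm
DA = intersection_volume(axonal, dendritic, lo=(-0.5, -0.5, -0.5), hi=(0.5, 0.5, 0.5))
print('estimated (DA) in mm^3:', DA)
```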

Meanwhile, Fig. 2 may help us to grasp intuitively the connections of cortical neurons as abstracted in our model (Fig. 1). For each point at a certain level of the cortical thickness (marked by crosses on the diagram) we may draw the outline of a region (solid lines) which encloses all the dendritic fields of neurons whose axonal fields contain that point. These are the regions from which excitation can be transmitted monosynaptically to the given point. Conversely (broken lines) we may draw regions which contain all the axonal fields of neurons whose dendritic fields contain the point in question, i.e. regions to which excitation can be transmitted monosynaptically from that point.

You can think of it as a kind of optics, with lines conducting excitation in the place of ‘rays’. For each point of the cortex we have a region which can be ‘seen’ through it looking backward in time one synaptic delay: the region of possible sources of monosynaptic excitation to this point. Looking forward in time through the same hole we see the region where excitation will go next. There is a good deal of reflection

Figs. 4-12. Camera lucida drawings from Golgi preparations of cerebral cortex of Bos Taurus. Scale: 100 : 1. The two bars to the left and right of the drawings are 'vertical' in the histological sense: their inclination indicates the curvature of the cortex. Depth in the cortex is indicated as a fraction of the cortical thickness to the left and as the distance from the surface of the cortex (in microns) to the right. See text.

in this optics, since for every point the afferent region overlaps to some extent with the efferent region. Each point, in other words, can be part of a reverberation of activity involving only two neurons, a finding which should be kept in mind when one looks for the place of reverberating circuits in the cortex.


SOME O B S E R V A T I O N S O N G O L G I M A T E R I A L

Over the past years we have been preparing in our laboratory a fair number of Golgi impregnations of the cerebral cortex of Bos Taurus. While this material will serve for a more systematic publication, we may show some samples here, mainly to discuss the preceding abstraction.

The choice of Bos Taurus, an animal which has been selected for many centuries according to anything but intellectual criteria, offers the possibility of comparing this material with the relatively well-known material of Homo sapiens. The common features promise to reflect the basic functional scheme of the cortex, while the specific differences may point to the substrate of specific human performances. One of the dangers of comparisons between anatomical and psychological phenomena lies in the fact that, when we observe our own behaviour, we are mostly aware of the most complex phenomena, while in anatomy we limit ourselves to the consideration of small meshes of the total net. We tend to establish correlations at non-corresponding levels of complexity. Our material on Bos Taurus will serve as a warning, as it will show that the elementary meshes are surprisingly similar in the bovine and in the human cortex.

Fig. 3 shows some branches of the axon of a tangentially cut pyramidal cell of the lower layers of the cortex. The photograph is a composite of 58 individual plates, a procedure which is made necessary by the fineness of the axonal processes (requiring good definition) and their wide three-dimensional spread (requiring great focal depth), two attributes which make cortical neurons some of the most non-photogenic objects imaginable. One has to resort therefore to the old method of camera lucida drawing (Figs. 4-12).

It is important to note the scale of the drawings, 100 : 1. While in Nissl preparations we are astonished to see how many neurons there are, Golgi pictures tend to surprise us with the size of these objects. Dendritic as well as axonal trees of the order of 1 mm or so are quite common; in the smaller neurons they are at least hundreds of microns across.

Fig. 5. For legend see Fig. 4.

-.- "I Fig. 6. For legend see Fig. 4.


Counts and measurements together inform us of the vast compenetration of the neurons as one of the most striking characteristics of the cerebral cortex. That this is not true for all types of grey substance should be clear considering the cerebellar cortex, where large parts of the axon collaterals (the parallel fibres) are truly parallel,

Fig. 7. For legend see Fig. 4.


Fig. 8. For legend see Fig. 4.


and the dendritic ramifications of Purkinje cells are distinctly separated from each other.

One of the most burning questions raised by all preceding abstractions of the structure of the cerebral cortex, including our own, set forth in the preceding pages, is that of the justification of distinct types of cortical neurons. This question requires a statistical answer which would be premature on the basis of our present material (about 250 camera lucida drawings of cortical neurons with good axonal ramification). We may, however, give this preliminary account.

Neurons which cannot be classified in one of the following three types are rare in our material (only about 1/10).

Small, stellate cells, i.e. cells whose dendrites are arranged in roughly spherical symmetry around the cell body, located at or slightly above or below the middle of the cortical thickness, have short axons very densely ramified in a small region with a diameter comparable to that of the dendritic ramification, and greatly overlapping the latter. These cells form the ‘granule layer’, or fourth layer of cytoarchitectonics (Figs. 4-6).

Neurons whose dendritic ramification follows a pattern of cylindrical symmetry


Fig. 9. For legend see Fig. 4.


around a vertical axis, reaching vertically over larger distances than horizontally, have descending axons with sparse, roughly horizontal collaterals. These are, of course, the well-known pyramidal cells. Their size, including the length of the collaterals, varies with the position of the cell body in the cortical thickness, the larger ones being found in the lower layers (Figs. 7-9). (This has been worked out quantitatively, for the dendrites, by Bok, 1959).

Neurons with a few long, obliquely descending dendrites have ascending axons which ramify in a tree not as dense as that of the first type but not quite as sparse as that of the second. They are found in the lower half of the cortex (Figs. 10, 11).

If we consider only the four criteria (large or small; dense or sparse axonal branching; axon upward or downward; dendritic pattern stellate or cylindrical), we obtain sixteen


Fig. 10. For legend see Fig. 4.


Fig. 11. For legend see Fig. 4.

possible descriptions, of which we find only nine occupied by neurons of our collection, a fact which strongly suggests a distinction of types rather than a continuous gradation, even at this gross level of description. Thus, for instance, we found no large pyramidal cells with descending axon and dense terminal arborization, no large stellate cells with descending axon and dense terminal arborization, no small neurons with ascending axon and sparse arborization which were not stellate, etc.
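As a piece of bookkeeping only, the sixteen descriptions can be enumerated as below. The identification of the 'cylindrical' dendritic pattern with pyramidal cells, and the marking of only the three combinations explicitly reported absent, are assumptions of this illustration; the full list of the nine occupied cells is not given in the text.

```python
from itertools import product

# The four binary criteria of the text give 2**4 = 16 possible descriptions.
# Only the three combinations explicitly reported absent are marked below.

criteria = [('large', 'small'),
            ('dense arborization', 'sparse arborization'),
            ('axon ascending', 'axon descending'),
            ('stellate dendrites', 'cylindrical dendrites')]

reported_absent = {
    # large pyramidal (cylindrical) cells with descending axon and dense arborization
    ('large', 'dense arborization', 'axon descending', 'cylindrical dendrites'),
    # large stellate cells with descending axon and dense arborization
    ('large', 'dense arborization', 'axon descending', 'stellate dendrites'),
    # small non-stellate neurons with ascending axon and sparse arborization
    ('small', 'sparse arborization', 'axon ascending', 'cylindrical dendrites'),
}

for combo in product(*criteria):
    tag = 'absent (reported)' if combo in reported_absent else ''
    print(', '.join(combo), tag)
print('total descriptions:', 2 ** len(criteria))
```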

Another tacit assumption used in our abstraction concerns the possibility of replacing the actual 'trees' of dendrites and axons with continuous regions for which only a certain density is given (see, e.g., Sholl, 1956). This was done in the belief that the axonal spread is really much more diffuse and regular than the illustrations (by Cajal) on which our abstraction was based would show. Our own material leads us to correct this belief, in the sense that we have no reason to suppose that the sparse


Fig. 12. For legend see Fig. 4.

axonal trees of pyramidal cells in our preparations are really incomplete, since in the same preparations other types of cells show a much denser axonal ramification. It must be remembered, however, that our description of 'clouds of excitation' being produced by all kinds of neurons, diffusely in their 'axonal field', may still not be too far from the truth, since in our scheme we have mostly to suppose groups of neighbouring pyramidal cells being excited together, and the sum of their greatly overlapping axonal fields will produce, when firing, a density of excitation compatible with our description.

Besides granular cells, fibres entering the cortex from below (most likely to be

Fig. 13. This family of curves represents, for seven points of the human parietal cortex (1.5-2 mm apart), the optical density of myelin preparations as a function of the distance from the surface of the cortex. (From Vota, 1961.)


identified with the 'specific afferents' of other descriptions) also have very dense terminal arborizations (Fig. 12).

That the Golgi stain of our preparations, which shows few collaterals for the axons of pyramidal cells, does not leave out much, may also be argued from the following indirect evidence (Figs. 13, 14).


Fig. 14. Diagram to show the expected density of myelinated fibres in different layers of the cortex if one allows a number of afferents to the upper part of the cortex corresponding in density to the efferent fibres, besides, for each pyramidal cell, a fibre to the white substance and two horizontal collaterals of a fixed length. (From Braitenberg, 1962.)

Measurements of the optical density of myelin preparations of the cortex can be taken as representing the density of myelinated, i.e. long, unbranched, axons and collaterals. A set of plots of the optical density as a function of depth, obtained from neighbouring points of the parietal cortex (by Vota, 1961, in our laboratory), is shown as an example in Fig. 13. These curves can be compared with a diagram, Fig. 14 (taken from Braitenberg, 1962), showing the contributions of various types of elements (ascending and descending fibres, horizontal collaterals) to the myelinated fibre population, as we can expect them on the basis of the Golgi picture. It can be argued that the shape of the actual densitometric curves is compatible only with a small number (about two or three) of horizontal collaterals a few hundred microns long for each descending axon, since otherwise the two maxima of fibre density known as the 'striae of Baillarger' (G and B on our diagram) would be much more pronounced over the background of the vertical fibre population.
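A toy calculation in the spirit of this argument is sketched below. The layer boundaries, the proportion of afferents, and the depths of the two stripes are all invented; the sketch only shows how the prominence of the stripes over the vertical fibre background grows with the number k of horizontal collaterals per descending axon.

```python
import numpy as np

# Toy depth profile of myelinated-fibre density.  Assumptions (all invented for
# illustration): pyramidal cells lie between 0.3 and 0.9 of the cortical depth;
# each sends one descending fibre to the white substance; ascending afferents are
# half as dense as the efferents; each cell contributes k horizontal collaterals,
# concentrated at two hypothetical 'stripe' depths (0.40 and 0.65).

depth = np.linspace(0.0, 1.0, 101)
cells = np.where((depth >= 0.3) & (depth <= 0.9), 1.0, 0.0)   # pyramidal cell bodies per level

descending = np.cumsum(cells)                                 # fibres passing each level downward
ascending = 0.5 * descending[::-1]                            # afferents, assumed half as dense
stripes = np.exp(-((depth - 0.40) / 0.03) ** 2) + np.exp(-((depth - 0.65) / 0.03) ** 2)

def profile(k):
    horizontal = k * cells.sum() * stripes / stripes.sum()    # k collaterals per cell, put in the stripes
    return descending + ascending + horizontal

for k in (0, 2, 10):
    p = profile(k)
    stripe_level = p[40]                                      # value at the outer stripe
    background = 0.5 * (p[30] + p[50])                        # neighbouring levels
    print(f'k={k:2d}  stripe/background = {stripe_level / background:.2f}')
# Only a small k keeps the stripes as modest maxima over the background of
# vertical fibres, as the densitometric curves require.
```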

C O N C L U S I O N

As our own histological (and histonomical) observations on the cerebral cortex progress, we are becoming more confident in our attempt at translating the structure of the grey substance into sets of neuronic equations. It is quite possible that this direct translation from the structure will become not only a useful, but also a necessary, method of neurophysiology, if intrinsic limitations of electrophysiological recording techniques prove to be serious. Unfortunately, a common difficulty is superimposed on both the difficulties of our histophysiological translation and those of electrophysiological techniques, since the results which either method can give at best, i.e. tables of functional relations between individual points or elements of the grey substance, are nothing but the raw material for a theory which will show how the total activity results from the individual couplings. And this, we are informed by physicists, is a very arduous problem in itself.

A C K N O W L E D G E M E N T S

Research described in this paper was sponsored in part by the European Office, OAR, United States Air Force.

S U M M A R Y

Some aspects of histology, histonomy and histologics are discussed in relation to cybernetics.

If we turn to histological methods to satisfy our cybernetical curiosity about the patterns of connections in real animal nerve nets, there is indeed reason to set apart our activity from traditional histology, as our host has done, by christening it with some new name, such as histonomy. If we are inclined to abstract nerve nets into nets of logical relations, we may call this facetiously histologics.

Here the difficulties are of various kinds. Many physiological results (about conduction velocity in axons, decrement in dendrites, synapses, etc.) can be immediately read into the histological picture with some assurance.

But, e.g., the morphological distinction of excitatory and inhibitory synapses is still highly debatable, and no way is known to infer the threshold of a neuron from the histology. Another difficulty concerns the formalism, since even if we decide to represent a piece of grey substance hypothetically as a table of coupling coefficients between neurons arranged in a certain geometry, and even if this geometry is highly repetitive, we are embarrassed when trying to handle logical formulae containing a million or more variables. This should not deter us from trying. Isolated aspects of the logics can be inferred from aspects of the histology (types of symmetry, maximum logical depth possible, degree of complexity). In some cases (cerebellar cortex) inspection and intuition may suffice to produce plausible logical schemes. In the case of the cerebral cortex, which will be illustrated with material from our own Golgi analysis, one has to be content with partial insights. A computer program representing a small piece of cortical grey matter, which is being prepared by F. Lauria in our laboratory, may help us to fix some of the unknown constants in the logical scheme through a process of trial and error.


R E F E R E N C E S

ARIENS KAPPERS, C. U., (1920); Die vergleichende Anatomie des Nervensystems der Wirbeltiere und des Menschen. Haarlem, Erven F. Bohn.
BEURLE, R. L., (1954a); Activity in a block of cells capable of regenerating pulses. RRE Memorandum, 1042.
BEURLE, R. L., (1954b); Properties of a block of cells capable of regenerating pulses. RRE Memorandum, 1043.
BOK, S. T., (1959); Histonomy of the Cerebral Cortex. Amsterdam, Elsevier.
BRAITENBERG, V., (1961a); Funktionelle Deutung von Strukturen in der grauen Substanz des Nervensystems. Die Naturwissenschaften, 48, 489-496.
BRAITENBERG, V., (1961b); Functional interpretation of cerebellar histology. Nature, 190, 539-540.
BRAITENBERG, V., (1962); A note on myeloarchitectonics. Journal of Comparative Neurology, 118, 141-156.
BRAITENBERG, V., AND LAURIA, F., (1960); Toward a mathematical description of the grey substance of nervous systems. Nuovo Cimento, Suppl. 2, 18, 149-165.
BRAITENBERG, V., AND ONESTO, N., (1960); The cerebellar cortex as a timing organ. Discussion of an hypothesis. Proceedings 1st International Conference on Medical Cybernetics (p. 1-19).
CAIANIELLO, E. R., (1961); Outline of a theory of thought-processes and thinking machines. Journal of Theoretical Biology, 2, 204-235.
CAJAL, S. R., (1911); Histologie du Système Nerveux de l'Homme et des Vertébrés. Madrid, C.S.I.C.
FARLEY, B. G., AND CLARK, W. A., (1961); Information Theory: Fourth London Symposium. C. Cherry, Editor. London, Butterworths.
SHOLL, D. A., (1956); The Organization of the Cerebral Cortex. London, Methuen and Co.
UTTLEY, A. M., (1955); The probability of neural connexions. Proceedings of the Royal Society, Series B, 144, 229-240.
VOTA, U., (1961); Unpublished observations.


A Discussion of the Cybernetics of Learning Behaviour

G O R D O N PASK

System Research Ltd., Richmond, Surrey (Great Britain)

CONTENTS

1. Previous models of learning
   1.1. Psychological and physiological models
   1.2. The approach of cybernetics
2. The conception of learning
   2.1. Adaptive behaviour and learning
   2.2. The implications of learning
   2.3. The criterion of identification
   2.4. Restrictions imposed upon mechanisms and models of learning
   2.5. Finite automata
   2.6. Need for many representations
   2.7. The field of enquiry
3. Experimental verification
   3.1. Difficulty over linguistic ideas
   3.2. Environments where learning occurs
   3.3. Tests for learning behaviour
   3.4. Systems capable of learning
   3.5. The mechanism of learning
4. Evolving organisms
   4.1. Evolutionary systems
   4.2. Invariance of an organisation
   4.3. The origin of the indeterminacy associated with learning
   4.4. Identification of many evolutionary systems in a brain
5. Physical mechanisms
   5.1. Particular model
   5.2. Internal environment
   5.3. Memory systems, learning mechanisms, and the evolving organisms
   5.4. The α system
   5.5. The β system
   5.6. Identification between α and β systems
6. The evolution of organisms
   6.1. Organisms as decision makers
   6.2. Interaction with evolving organisms
   6.3. A self organising system
   6.4. The mechanism of evolution
   6.5. The necessity for evolution
   6.6. Learning related to evolution
7. Digression into physical assemblies
   7.1. Solving an undecidable problem
   7.2. Decision in unstable equilibrium
8. Different kinds of learning
   8.1. Forms of learning mechanism
   8.2. Relation between organisations
Appendix
References

1. P R E V I O U S M O D E L S O F L E A R N I N G

1.1. Psychological and physiological models

Psychological models of learning behaviour normally describe what changes will occur as a result of a stipulated experience; they consider the 'before' and 'after' stationary states of the organism, and, on the whole, neglect the non-stationary transition. It is true that statistical models (Estes, 1950; Bush and Mosteller, 1955; Siegal, 1959) make a broad comment upon the actual process, but their predictive value is confined to parameters such as the number of trials required to reach a given criterion of performance. Few of the psychological models propose a mechanism for learning. Those that do are either testable but restricted, e.g. Brown's (1958) model for short-term memory or Broadbent's (1958) filter model, or, as in the case of Hull (1952) and Eysenck (1956), there is some doubt about the status of the intervening variables that are introduced. Without exception, the testable models are open to the criticism that they deal with learning in very restricted conditions such as a Skinner box (1938) or a maze, and consequently, although they refer to adaptive behaviour amongst a specified set of alternatives, their validity as models of learning depends upon the assumption that this facet of behaviour can be regarded as relevant and that the rest of it (the maundering of the rat, the idle thoughts of the man) can be neglected. The evidence seems to contradict this supposition and (whilst it is difficult to offer an alternative) it is fair to say that this form of model refers to reinforced selection and completely omits the issue of attention.

Physiological models (Pavlov, 1927; Sherrington, 1948; Hyden, 1960; Bok, 1961) rightly avoid commitment to learning as such and describe some process within the brain. Thus Pavlov was acutely aware of the difficulties of extending his conditioning models beyond the experimentally verified region. In general, with any physiological model, it is necessary to distinguish possibility from actuality with great care. A conditioning mechanism, for example, undoubtedly could account for adaptive behaviour; but that is different from saying it does account for this behaviour in every organism or upon every occasion.

1.2. The approach of cybernetics

Some cybernetic models are derived from a psychological root; for example, Rosenblatt's (1961) perceptron and George's (1961) automata stem largely from Hebb's (1949) theory. Others, such as Grey Walter's (1953) and Angyan's (1958) respective tortoises, have a broader behavioural antecedent.

On the other hand, neurone models, like Harmon's (1961) and Lettvin's (1959), are based upon facts of microscopic physiology and have the same predictive power linked to the same restrictions as an overtly physiological construction.


Next, there are models which start from a few physiological facts, such as known characteristics or connectivities of neurones, and add to these certain cybernetically plausible assumptions. At a microscopic level, McCulloch's (1960) work is the most explicit case of this technique (though it does not, in fact, refer to adaptation so much as to perception), for its assumptions stem from Boolean logic (Rashevsky, 1960, describes a number of networks that are adaptive). Uttley (1956), using a different set of assumptions, considered the hypothesis that conditional probability computation occurs extensively in the nervous system. At a macroscopic level, Beurle (1954) has constructed a statistical mechanical model involving a population of artificial neurones which has been successfully simulated, whilst Napalkov's (1961) proposals lie between the microscopic and macroscopic extremes.

Cyberneticians are naturally concerned with the logic of large systems and the logical calibre of the learning process. Thus Willis (1959) and Cameron (1960) point out the advantages and limitations of threshold logic. Papert (1960) considers the constraints imposed upon the adaptive process in a wholly arbitrary network, and Ivahnenko (1962) recently published a series of papers reconciling the presently opposed ideas of the brain as an undifferentiated, fully malleable system and as a well-structured device that has a few adaptive parameters. MacKay (1951) has discussed the philosophy of learning as such, the implications of the word and the extent to which learning behaviour can be simulated; in addition to which he has proposed a number of brain-like automata. But it is Ashby (1956) who takes the purely cybernetic approach to learning. Physiological mechanisms are shown to be special cases of completely general systems exhibiting principles such as homeostasis and dynamic stability. He considers the behaviour of these systems in different experimental conditions and displays such statements as 'the system learns' or 'the system has a memory' in their true colour as assertions that are made relative to a particular observer.

2. T H E C O N C E P T I O N O F L E A R N I N G

2.1. Adaptive behaviour and learning

The adaptive behaviours called 'learning' are inadequately demarcated. As a matter of logic, 'learning' must be held apart from adaptation alone (although adaptation always accompanies 'learning'), for without this distinction most mechanistic comments on the subject become vacuous. Ashby has shown that any large dynamic assembly must adapt and retain some mark of its previous configuration. Hence, to say that a brain or a large artifact does exhibit this behaviour is an undiscriminating and almost tautologous remark. By the same token, 'memory' (inferred from an observed learning behaviour) must be distinguished from 'retention'.

Another of Ashby's results (1958) has demolished the once popular belief that 'learning' is simply an adaptation directed to some objective, like sustenance of the learning organism. This view is far too naive; all stable systems are homeostatic in a tenable environment and all homeostasis can be credited with an objective. If the environment is perturbed in a particularly regular fashion, the adaptive changes that maintain the homeostasis are called habituation, which is a predictable consequence of elaborate structure.

2.2. The implications of learning

It is generally agreed that, over and above directed adaptation, learning entails the construction and appreciation of order amongst objects and their images. An organism that learns becomes able to deal coherently with its environment, or the flux of its autonomous activity, by imposing a plan upon otherwise haphazardly arranged constituents. Minsky and Selfridge (1960), in the idiom of 'artificial intelligence', require generalisation and the development of 'a heuristic' as indices of learning, whilst MacKay (1956) has maintained the very similar view that learning of any kind entails building a hierarchy of languages or systems of signs and relations. The hierarchy is realistic insofar as some translation can take place between these different levels of discourse.

2.3. The criterion of identification

For the present purpose we shall concentrate upon the act of identification, or, in Carnap's (1947) sense, an interpretation of some nonlogical sign in a calculus in terms of a designatum, as the crucial feature of learning ('translation of a message between languages' and 'identification of a pair of otherwise incomparable sign sequences' are operationally equivalent). Consequently an organism that learns must also be able to identify relevant stimuli (or values of attributes in its environment), to bring its attention to bear upon a limited universe of discourse which includes these stimuli and the response alternatives amongst which adaptation occurs. Similarly, it must form and use named 'concepts' or generalisations, the crux of this process being identification between distinct concepts (called 'insight') rather than the aggregation of common features, which is a relatively trivial result of constrained adaptation (notice that this kind of 'identification' is a 'translation').

2.4. Restrictions imposed upon mechanisms and models of learning

Once a more sophisticated view of learning has been accepted, we are forced to discard most of the neat and familiar representations of mental or neural activity. In particular, the restricted models of metricised information theory are inadequate vehicles for a description of any mechanism supposed to account for learning behaviour (Pask, 1958a,b). Of course, there is a sense in which information is communicated and transformed when an organism is learning, but, as Greene (1961) points out (when he remarks that percepts carry with them overtones of other messages), this is not the sense of any metricised information theory. McCulloch and MacKay (1952) said much the same when they discussed the status of 'signals' in a brain. Information is represented in various ways, for example in terms of impulse rate and impulse phase relation, and neither an impulse nor any other event is unambiguously a signal.

2.5. Finite automata

Now information theory is applicable within the framework of any one representation. This representation defines a set of permissible outcomes, messages and codes. Similarly, any one representation defines a set of abstract models which can be identified with a physical brain by an observer who specifies his state description and forms a system in the sense of Ashby. Certain of these systems are isomorphic or homomorphic images of one another, and within such a well-related set of systems there is a calculus equivalent to the calculus of information theory. A typical abstract model is a McCulloch and Pitts (1947) network of 'artificial neurones'; a typical system is its identification with attributes of some part of a brain, and in this case the transformable set of systems is the set of (possibly restricted) finite automata.
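A minimal sketch of this remark: once a small McCulloch-Pitts net is given a state description, its behaviour is a finite transition table over finitely many states and inputs. The two-neuron wiring below is invented for illustration and is not a model of any particular brain structure.

```python
from itertools import product

# A McCulloch-Pitts network, identified with a state description, is a finite
# automaton: the state is the tuple of binary outputs and the input letter drives
# the transition.  The weights, thresholds and wiring here are hypothetical.

def mp_step(state, x):
    """One synchronous update of a two-neuron net with one binary input x."""
    u1, u2 = state
    n1 = int(2 * x + 1 * u2 - 1 >= 0)    # neuron 1: threshold 1, excited by input and by neuron 2
    n2 = int(1 * u1 - 1 * x - 0 >= 0)    # neuron 2: threshold 0, excited by neuron 1, inhibited by input
    return (n1, n2)

# Tabulate the complete transition function: finitely many states, finitely many inputs.
states = list(product((0, 1), repeat=2))
for s in states:
    for x in (0, 1):
        print(f'state {s}, input {x} -> {mp_step(s, x)}')
```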

But there is no guarantee that information theory or any other calculus is applicable between representations (indeed we must assume the contrary).

2.6. Need for many representations

Now we have argued that learning is crucially dependent upon an act of identification, in other words upon a relation between different representations of data. But in this case the process of learning cannot be described or, at best, can rarely be described, in terms of a single information system. Normally it will be necessary for an observer who investigates learning to reconstruct his identification and remake his state description whenever the observed organism 'changes its attention', or exhibits 'insightful behaviour' (that is, whenever it changes its identification).

No single system can represent the mechanism that underlies learning. No finite automaton can be held to learn. A search for the mechanism of learning is either unduly parochial or misconceived, for it is the activity of many systems, not one, that is correlated with learning behaviour (or, we can say, it is a single self-organising system, as the phrase is used by Von Foerster, 1960, of which the separate systems are components, related by the appearance of learning and possibly in no other way).

2.7. The field of enquiry

It is important to recognize that these comments apply to a field of enquiry, namely 'learning behaviour', and not to all facets of mental or neural activity. Thus Harmon (1959) is on firm enough ground when he says that impulses are the minimal components of messages in the language of neurones, simply because Harmon is not directly concerned with learning and is consequently able to consider events in a single language system. But since we are concerned with learning, the pertinent enquiries refer to many language systems (and their structure is changed by the process being investigated). Thus we cannot afford such elegant and convenient models, nor, at the present state of knowledge, can assertions about learning be reduced, say by Nagel's (1949) procedure, to expressions in the conceptual framework of neural networks.

On the other hand, it would be just as absurd to suggest that networks and finite automata are valueless in the role of explanatory devices as it would be to suggest that bond models of chemical molecules are valueless explanatory devices in reaction chemistry. For whilst it is legitimate to remark that neurone networks (or bond models) are unable to answer our enquiries concerning the mechanisms of learning


(or the dynamics of a reaction), probabilistic combinations of either model have predictive value. Their merit is conceptual simplicity. Their weak point is that, given a probabilistic combination which seems to fit the facts, we are left with an indeterminacy* (Pask, 1962; Von Foerster and Pask, 1960/1961), or uncertainty regarding the form of organisation ultimately responsible for the observed behaviour.

This indeterminacy comes from speaking in terms of one language about a construct that entails many languages, and it can only be removed by an act of identification, in particular, if the observer translates expressions from the language of network model or bond model descriptions, into the corresponding expressions in a more comprehensive language or metalanguage.

3. E X P E R I M E N T A L V E R I F I C A T I O N

3.1. Difficulty over linguistic ideas

In order to pass from abstract organisations to physical assemblies, like the brains of men, animals and artifacts, we shall construct several experimental tests for learning. The greatest difficulty at this stage is confining our discussion to the subject of learning. In an attempt to avoid the whole gamut of linguistic behaviour, I may do violence to many linguistic ideas by inadvertent mistakes or by deliberate but illegitimate specialisation.

3.2. Environments where learning occurs

The primitive constituents of a test situation are an organism (said, after testing, to learn or not to learn), its environment of objects and actions, and its observer. Depending upon the extent to which he infers similarity with the organism, the observer may produce and respond to the actions and objects that constitute the environment. If the organism is a man, the observer is assumed to infer considerable similarity; if it is an animal or an artifact, rather little similarity. But, in any case, the inference is made on grounds that are remote from the experimental evidence.

There is always a sense in which the environment is commonly named by the organism and the observer. (1) Given a strong inference of similarity, the set of possible objects and actions is taken to be identified in substantially the same way by the observer and the organism, hence to constitute, on the one hand, a set of stimuli or, in the case of verbal discourse, of words, and, on the other hand, a set of nameable response alternatives, which may also be words or sequences of words. (2) Lacking a strong inference of similarity, the observer must be able, by previous observation or stimulus-equivalence investigations, to describe the effective stimuli (and place them in correspondence with the objects and actions he perceives), and to describe the response alternatives for the organism. Thus Lettvin et al. (1959) described the perceptual attribute space for the frog and described how a frog could respond.

* The word 'indeterminacy' is used to distinguish this uncertainty from the more familiar classical uncertainty regarding the state of a system, given a set of relevant attributes defining it. Whereas a 'classical' uncertainty is reduced by making more microscopic observations, 'indeterminacy' is not. Although classical uncertainty may often perturb real observations of brains, it has little to do with the immediate discussion.



We thus characterise the test environment by a set X of signs x for objects and actions affecting the organism as its input, and a set Y of signs y for responsive actions affecting objects in the environment, which constitutes its output. Manifestly, if selections from X and Y are to be signals representing information, neither X nor Y can be unrestricted. Given this, we call X ∪ Y the finite vocabulary of an operational language (that is, a system consisting of the signs in X or Y and relations that appear as invariants of the organism's behaviour), whereby the organism indulges in discourse with its environment. Tentatively, x ∈ X will be identified with 'subject' words, the output signs y ∈ Y with 'predicate' words, and the organism itself as any operator capable of performing the 'and so' operation of coherent discourse to produce utterances (x, y) (in the sense of M. Masterman, 1958). The least organism of this kind (which features in our formal discussion) must contemplate or take into account at least the possible pairs (x, y) of subjects and predicates when selecting some y from Y.

We should emphasize that any construction that involves organisms and operational languages is specified relative to a scientific or descriptive language λ (used and comprehended by a body of observers). There is no guarantee that the structure of the operational languages will be preserved if the descriptive language is changed, say from λ to λ*. It may be possible to represent systems of operational languages in a manner that is invariant with transformations λ → λ*. The recent discussions of cybernetic logic by Gunther (1962) indicate that this can be achieved, at any rate in principle. But, whilst admitting that relativism is unsatisfactory, we shall not discuss the issue any further in the present paper.

Now a scientific language is a metalanguage, in terms of which an operational language is described. We shall concentrate upon those systems constructed in λ within which it is possible to distinguish more than one operational language.

Let us denote these operational languages α, β, and their finite vocabularies Xα ∪ Yα, Xβ ∪ Yβ. Let permissible or sensible utterances have the form u = (x, y) and denote the set of permissible α utterances Uα ⊂ Xα ⊗ Yα and the set of permissible β utterances Uβ ⊂ Xβ ⊗ Yβ.

It is not too difficult to specify the conditions that must be satisfied if α and β are to exist in our λ description of a system, if we conduct the discussion in rather vague terms. Hence we shall give a broad account and become a little more precise, when we need to be, later (see Appendix).

(1) If α and β are operational languages in λ it must be possible to distinguish (also in the λ description of the system) distinct organisms capable of using these languages for communication.

(2) These organisms must belong to at least a couple of different species (say the A species of organisms and the B species of organisms), and communication must occur between the members of each species of organisms, being characterised in the case of α by utterances u ∈ Uα and in the case of β by utterances u ∈ Uβ.

(3) In order that α and β be distinct, the A system of communication must be distinct from the B system. The distinction could be secured in various ways, of which only one will be cited.

(4) (I) There must be at least some signs, used in the α discourse, which do not appear in the β discourse, and vice versa; hence (Xα ∪ Yα) ∩ (Xβ ∪ Yβ) ⊂ (Xα ∪ Yα) ∪ (Xβ ∪ Yβ). (II) It will be convenient to assume that Uα and Uβ are disjoint; thus [Uα ∩ Uβ] = ∅, where ∅ denotes the empty set. (III) The intention behind (II) is that Uα and Uβ are sets of utterances which enjoy the property, for any u0 ∈ Uα, that 'u0 means the same thing to all organisms of the A species' and similarly, for any u1 ∈ Uβ, that 'u1 means the same thing to all organisms of the B species'. (We shall make the relation of 'meaning the same thing' more precise later. For the moment, assume that a given utterance 'means the same thing' to a pair of organisms if, when received in the same conditions, it induces the same behaviour.) (IV) Hence [Uα ∩ Uβ] = ∅ implies that there is no u which means the same thing to all A species organisms and all B species organisms.

(5) This condition (which entails complete lack of relation between α and β with respect to universally interpreted signs) is a shade more restrictive than necessary, although it will be convenient to adopt it. But in any case we should insist that α and β are neither isomorphic nor homomorphic images of one another when the 'and so' precedence of x, y is interpreted as a composition.

(6) However, there must be a possibility of 'translation' between α and β or vice versa. We shall interpret a translation as a special transformation that has the index of the particular organism that performs this translation as its parameter. For this purpose, we introduce the concept of a limited area of 'meaning'. An α sign may mean the same thing to one member of the A species of organism as some β sign or some set or some sequence of β signs (but this cannot be a universal implication, true for all members of the A species of organism). Similarly, translation can be employed by B organisms to interpret α signs.

We shall argue that the possibility of 'translation' rests upon a similarity between the A species organisms and the B species organisms which exists even though they use distinct communication systems. (In our discussion we shall be chiefly concerned with similarities of physical structure or similarities in the relation between these organisms and a common environment, but there are many other possibilities. Thus a Frenchman and an Italian enjoy similar cultural backgrounds, and their discourse has a similar semantic content.)

(7) The α operational language and the β operational language remain unrelated with respect to the universally true assertions of the participating organisms. But they are, of course, perfectly well related from the viewpoint either of the λ observer or of a particular organism that identifies or translates a set of utterances, hence, according to our criterion, an organism that learns.

The possible relations of α, β and β, α are (1) that α and β characterise communication systems capable of sustaining the same level of discourse, or (2) that α is a metalanguage with reference to the operational language β (and is thus capable of describing some aspect of β) or, vice versa, that β is a metalanguage with reference to α. If α is a metalanguage relative to β (or vice versa), the functions determining α discourse are of higher order, and propositions in α are of a higher logical type than the corresponding logical functions and propositions in β, the hierarchy of orders or types being determined by elementary propositions in λ. To elaborate the point, α signs which might, for example, include 'furniture' cannot be derived from β signs, such as 'table' or 'chair'. To say that 'furniture' cannot be derived from expressions in β means (a) that the category of furniture, regarded as a disjunctive concept in β, must be defined ostensively by exemplars; but the choice of these exemplars is unlimited (we can only specify furniture as the particular tables and chairs and other objects pointed out by a β organism); and (b) that if the category 'furniture' is regarded as a conjunctive concept entailing those attributes common to tables and chairs, the choice of attributes is unlimited (and we can only specify the particular subset selected by this β organism).

(8) This organism specificity suggests that, whatever the logical status of α and β (from the λ observer's point of view), any participant B organism performing a translation from α to β must use β as a metalanguage. For this B organism must construct a description, based upon its experience, of the usage of α terms. Indeed we shall argue that it must construct within itself an image of an A organism and look at the environment in an A-like representation.

In the limiting case, which we shall examine in connection with the mechanism of learning, the analogues of A and B organisms have a repertoire of utterances which completely specify their behaviours. These organisms could be manifest in no other way, and in order to exist, they must make precisely these utterances. A complete α to β translation performed by a B organism thus amounts to constructing an informational replica of an A organism in the medium of the B organism concerned. In other words, an α → β translation amounts to reproducing an A organism. We shall return to this point in a moment; its immediate relevance is that it demonstrates an identity between the chosen criterion of learning and ideas which appear in Wiener's (1961) account of the reproductive process, which he is concerned with developing at the moment (Wiener, 1962). Specifically, we shall identify learning with a cybernetic property of organisations that reproduce or evolve in media like brains and social aggregates, and will argue that, at least in certain special cases (Wiener's concept applies to the general case of the 'Black and White Box' reproductive system), the fact of evolution (which entails some reproductive mechanism) will also entail the manifestation of learning behaviour.
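As an aside, the disjointness and 'means the same thing' conditions of (4) are easy to state operationally. The sketch below is not part of Pask's construction; the vocabularies are invented and the organisms are reduced to lookup tables from (condition, utterance) pairs to responses. It only illustrates testing that every α utterance is interpreted alike by two A species organisms while β utterances need not be.

```python
# A minimal sketch, assuming invented vocabularies and table-driven organisms,
# of conditions (II)-(IV): U_alpha and U_beta are disjoint, and an utterance
# 'means the same thing' to two organisms if, received under the same
# condition, it induces the same behaviour.

from itertools import product

U_alpha = {("shape", "round"), ("shape", "square")}   # alpha utterances (x, y)
U_beta = {("pulse", "fast"), ("pulse", "slow")}       # beta utterances
assert U_alpha.isdisjoint(U_beta)                     # condition (II)

def means_the_same(utterance, org_1, org_2, conditions):
    """True if the utterance induces the same behaviour in both organisms
    under every shared condition."""
    return all(org_1[(c, utterance)] == org_2[(c, utterance)] for c in conditions)

conditions = ["rest", "aroused"]
# Two A species organisms: tables from (condition, utterance) to a response.
a1 = {(c, u): ("respond", u[1]) for c, u in product(conditions, U_alpha | U_beta)}
a2 = dict(a1)
a2[("rest", ("pulse", "fast"))] = ("respond", "idiosyncratic")  # beta need not be shared

print(all(means_the_same(u, a1, a2, conditions) for u in U_alpha))  # True
print(all(means_the_same(u, a1, a2, conditions) for u in U_beta))   # False
```

In the paper the criterion is ultimately an equivalence relation over states of the physical embodiment (see 3.4); the tables here merely stand in for that behaviour.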

3.3. Tests for learning behaviour

Learning occurs insofar as the system said to learn is able to identify utterances or signs in its operational language with novel designata; if these designata are utterances in a different operational language, the organism can learn insofar as it can perform a translation.

Tests of this ability may involve either control or communication, and each kind of test is fairly commonly used. So far as control is concerned, the experimenter specifies, and the subject agrees or is known to accept, an objective which can only be realised either (I) by recognising attributes of events or patterns of distinct stimuli which have not been encountered upon previous occasions (these are commonly regarded as tests of perception or abstraction), or (II) by constructing a novel categorisation of objects in the environment (these are grouping tests). In either case the subject is required to



identify signs in his own language with signs in the language of a feedback loop, stabilising the desired objective, which the experimenter has contrived in the environment. Of course, only a particular form of learning is detected, namely a form which entails order relations that the experimenter can appreciate himself.

So far as communication is concerned, the test situation is normally an interview (a special case is considered by John Clark). We assume that any pair of men have different operational languages or, equivalently, different 'ways of thinking' which they prefer to use. (The test is rarely applied to animals because of obvious but not insurmountable difficulties.) In conversation, 'translation' occurs in the sense that the compromise needed to maintain coherent discourse about an unfamiliar topic involves the construction of a language or 'way of thinking' that is commonly intelligible to the participants. Hence the interviewer who is anxious to evaluate learning endeavours to estimate the other participants' ability to communicate in this situation, discounting, so far as possible, the effect of extraneous variables, such as 'diffidence' or emotional bias. The disadvantage of this method is that the observer who conducts the test is, himself, a participant and his evaluation must be translated into terms of λ; a rather similar criticism applies to the alternative technique of observing a man who is interacting with a group of other men (to assess the extent to which this individual makes himself understood and manages to understand the point of view adopted by the others).

3.4. Systems capable of learning

In order to exhibit 'learning' as a cybernetic property of organisations, it is necessary to represent the systems we consider in our λ description in sufficiently abstract and general terms. For the present purpose the most convenient representation is Rosen's (1958), wherein a system is conceived as a directed graph, such as illustrated in Fig. 1a.

Fig. 1a.

The directed edges of the graph correspond to mappings F, the nodes to the conjunction of the domain of a mapping and the ranges of some other mappings, as in the Appendix.

Now this representation can be used to depict either the physical embodiment of a communication system E, in which case the several mappings in the graph represent physically distinct subsystems Ei and coupling functions between these subsystems;



or, alternatively, it may be used to depict the communication system as an information structure Eα or Eβ without any particular physical referent, in which case the mappings are transformations of signs and the graph determines which sign transforming subsystems can provide an input to which others. Communication entails the existence of cyclic subgraphs, like a and b in Fig. 1b.

Fig. 1b.

The process of adaptation appears at the level of physical embodiment. It is formalised as a change in a parameter φi (for subsystem Ei) which selects a particular mapping f(φi) from a set of possibilities Fi. Since the domain of f(φi) may be the domain of Fi or a restriction of this domain, adaptation not only changes the kind of function that characterises Ei but also the momentary couplings between Ei and the other subsystems (within limits defined by the system graph). All subsystems of interest are adaptive.
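A small sketch may make the formal point concrete. It is not Rosen's or Pask's formulation; the candidate mappings, their domains and the parameter values are all invented, and the only claim illustrated is that selecting f(φi) can change both the function computed by Ei and its momentary couplings.

```python
# A hedged sketch (names invented) of an adaptive subsystem: Ei carries a set
# Fi of candidate mappings; the parameter phi selects one of them, and because
# each candidate may accept a different (restricted) input domain, changing
# phi also changes which neighbours can currently drive Ei.

class AdaptiveSubsystem:
    def __init__(self, name, candidates):
        # candidates: {phi: (accepted_inputs, mapping_function)}
        self.name = name
        self.candidates = candidates
        self.phi = next(iter(candidates))      # current parameter value

    def adapt(self, phi):
        """Adaptation = selecting another mapping f(phi) from Fi."""
        self.phi = phi

    def domain(self):
        return self.candidates[self.phi][0]    # momentary coupling: accepted signs

    def respond(self, sign):
        accepted, f = self.candidates[self.phi]
        return f(sign) if sign in accepted else None

E1 = AdaptiveSubsystem("E1", {
    "phi_a": ({"x1", "x2"}, lambda s: s.upper()),   # full domain
    "phi_b": ({"x1"},       lambda s: s + "'"),     # restricted domain
})

print(E1.respond("x2"))   # 'X2'  (coupled to whatever emits x2)
E1.adapt("phi_b")
print(E1.respond("x2"))   # None  (that coupling has lapsed)
```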

To pass from the representation of a physical embodiment of a communication system to the representation of a communication system, it is necessary to abstract special attributes of the physical states which are capable of acting as signs and sequences of which constitute utterances.

We comment that assertions about communication systems and their physical embodiment are necessarily assertions of a different logical type. Now in λ we need to distinguish a pair of communication systems, A and B, mediated by the same physical embodiment; hence we need to consider a pair of abstractions, one of which induces α signs and the other β signs. Within each of the systems A and B communication must occur and the 'signs' must have sensible characteristics; for example, if x is an α sign then it is reasonable to demand that x 'means the same thing' to all of the subsystems of A that 'speak' α and which happen to be similarly adapted; but, in order to preserve a distinction between α and β, an α sign must not 'mean the same thing' to all B subsystems. Further, from inspection of the directed graph, the calibre of an utterance (x, y) is a relation between an individual subsystem and its neighbours (for the set of signs that act as an input to a subsystem Ei is a product set, whereas its output set is a factor in a product set) and we insisted, in 3.2, that the α and β utterances remain distinct.

Now it appears impossible to offer an acceptably non-trivial representation of the phrase 'means the same thing' in terms of the information structures A or B alone. Perhaps the least image which avoids the criticism that we are talking about a calculus rather than a language is 'x means the same thing to Ei and Ej if and only if, when φi = φj, reception of x by Ei gives rise to the same state of Ei as reception of x by



Ej induces in Ej'. The criterion of similarity is an equivalence relation over the states of the physical embodiment. It has the same, perhaps arbitrary, origin as the partition of states into the states of distinct subsystems and the construction of a directed graph.

It is not too difficult to constrain the system so that Xα = ∪i(Xα_i), Yα = ∪i(Yα_i) are defined (together with Xβ and Yβ), and to ensure that Uα ∩ Uβ = ∅, providing, of course, that the underlying structure of the physical embodiment remains invariant. It is now necessary to enquire what a translation would imply in such a system and how it might be represented.

The particular approach we have chosen is this: within the system as it stands translation cannot occur, but there is no reason why the α and β utterances of coupled subsystems Ei ∪ Ej in A, for example, should not 'mean the same thing' as the utterances of coupled subsystems in B, such as Ek ∪ El; for these utterances have the form (u, u) rather than u, and if the underlying states (say of Ei) are z in Zi, the states of a coupled subsystem (say of Ei ∪ Ej) have the form (z, z) in Zi ⊗ Zj. Now to introduce a strong coupling, as against the existing concept of communication, is completely arbitrary unless it can be justified apart from our desire for a translation mechanism. We attempt to justify the innovation on the grounds that whenever an act of utterance is determined then it does not constitute an act of communication, for no uncertainty regarding the state of the recipient is removed by observing this utterance. Now it is possible to construct a system in which adaptation leads to precisely this deterministic condition so that, in the first place, groups of subsystems are impelled to communicate and then, as a result of communication, become so closely coupled that they can no longer be regarded as separate entities. The construction entails the idea of survival of the fabric or physical embodiment of the Ei, which depends, by hypothesis, upon some commodity (represented by an index θ(Ei)). If θ(Ei) exceeds some arbitrary value θ0, Ei survives. If not, Ei does not survive and is not defined in the directed graph; hence any realistic adaptation is such that it maximises θ(Ei). Let θ be a superadditive measure,

θ(Ei ∪ Ej) > θ(Ei) + θ(Ej)

and cooperation (or communication) is favoured; the rationale of this being that outcomes in Zi ⊗ Zj which are consistently achieved if and only if the state selections are jointly determined yield higher values of θ.
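The effect of such a superadditive θ can be illustrated with a toy computation (the numerical values and the particular form of θ are invented): pairs of subsystems coalesce whenever θ(Ei ∪ Ej) > θ(Ei) + θ(Ej), and a subsystem is defined only while its θ reaches θ0.

```python
# A toy illustration (assumptions mine, not the paper's) of why a superadditive
# survival index theta favours coalescence: two groups merge whenever the theta
# of the coupled pair exceeds the sum of their separate thetas, and a group
# whose theta falls below theta_0 does not survive.

THETA_0 = 1.0
members = {"E1": 0.8, "E2": 0.9, "E3": 1.4}       # solitary theta values

def theta(group):
    """Hypothetical superadditive index: coupled subsystems gain a bonus
    because jointly determined state selections pay off."""
    return sum(members[m] for m in group) + 0.5 * (len(group) - 1)

groups = [frozenset([m]) for m in members]

merged = True
while merged:                                      # greedy coalescence
    merged = False
    for g in list(groups):
        for h in list(groups):
            if g != h and theta(g | h) > theta(g) + theta(h):
                groups.remove(g); groups.remove(h); groups.append(g | h)
                merged = True
                break
        if merged:
            break

survivors = [set(g) for g in groups if theta(g) >= THETA_0]
print(survivors)   # a single coalition of E1, E2 and E3
```

With any strictly superadditive θ the greedy merging runs to completion, which is the point of the text: communication ends in couplings so close that the partners are no longer separate entities.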

The proposed mechanism of translation, and by hypothesis of learning, is of this kind. Given an adaptive system, the originally localised subsystems become distributed, i.e. Ei and Ej coalesce to form Ei ∪ Ej. Some α utterances u in use by Ei come to 'mean the same thing' as α utterances (u, u) in use by the distributed system. A similar process occurs in B to produce some β utterances u that are identified with β utterances (u, u), and at this level identification may be (and we assume in a particular system that it is) possible. In this case, because of the original identities between u and (u, u), a translation from u ∈ Uα into u ∈ Uβ is achieved. The underlying physical process which, as it were, 'drives' the system is evolutionary. On consideration it is manifest that some subsystems will fail to survive; hence in reality there must be a replacement or reproductive process. Hence translation involves construction of a



metalanguage (since functions defined upon [Ei ∪ Ej] ∪ [Ek ∪ El] are functions of the original functions).

We conclude with a few comments: (I) A subsystem is itself a set of utterances. (II) To translate these utterances, say from β to α, is a reproductive process. (III) Rosen (1958) has argued that self reproduction is self contradictory. 'Reproduction' uses a distinguished system such as 'the environment' to act as agent in this process.

(IV) In the present case the 'environment', which is also a prerequisite of efficient reproduction, is represented by the distinct species. Thus Ei and Ej in A jointly act as the environment which is an agent in the reproduction of Ek in B.

3.5. The mechanism of learning

Although we have concentrated upon situations in which learning behaviour is manifest, the argument applies to any organisation whatever. It applies to organisations evolving within a man's brain, for example, or in an artifact, and yields much the same result as we obtain by experimenting with the exteriorised behaviour.

There are several very elegant models in which organisations evolve. Wiener's mathematical conception has already been cited; Caianiello's (1961) model is ideally adapted to this field of enquiry; in each case stable modes of oscillation are the entities which evolve in the environment of some malleable network, brain or artifact, and which we should identify with evolving organisms. Similarly, there are sets of stable modes which correspond to the species of evolving organism.

Instead of reviewing these concepts I shall do the less ambitious job of describing a small and in many ways illegitimately simplified model which has been constructed. Tested according to our criteria it does behave as a 'learning mechanism' and could, conceivably, be identified with processes in a real brain. The model arises from the proposition that learning is the characteristic behaviour of an evolving species of subsystem in an evolutionary system. A very similar idea was advanced, but differently developed, by Pringle (1951). The particular case we have examined is the evolution of subsystems that are self organising systems: systems with an always positive rate of change of redundancy (Von Foerster, 1960; Beer, 1962). Since this is a model rather than an observed assembly, it is constructed in λ.

4. EVOLVING ORGANISMS

4.1. Evolutionary systems

Evolutionary systems have been considered in detail in other papers (Pask, 1958, 1961a,b). Briefly, a system E is evolutionary if:

(1) the states of E are partitioned into states of evolving subsystems Ei and of the environment in which they evolve (called the internal environment to avoid confusion);

(2) there is a competition between the evolving organisms Ei for some restricted commodity (food, energy, variability, specific metabolite) which is necessary for their survival (we can think of the structural fabric of the organism decaying unless the



process is countered by a synthetic activity, and suppose that the organisms compete for whatever commodity is needed to maintain this activity);

(3) the subsystems Ei of E reproduce their organisation in order to persist. Hence any subsystem or any coupled combination of subsystems is an organisation. We shall not be concerned with specialised reproductive mechanisms like sexual reproduction and it will do no harm to envisage only one reproductive process, namely interaction between a structure which is synthesized from raw material by a controlling process and this controlled activity. Thus activity induces the structure, whilst the structure implies the organisation which induces the activity needed in order to build the structure. Hence:

STRUCTURE ⇄ ACTIVITY

(4) reproduction entails replication with occasional variation. Commonly there are many structural templets L in a set ℒ that will induce the same activity R, or an activity that is equivalent in the sense that it belongs to a set ℛ such that all R ∈ ℛ induce some L ∈ ℒ. Hence the reproductive mechanism is stabilised by a redundancy of structure and function which is represented by a many to many mapping from ℒ to ℛ and from ℛ to ℒ. In other words, at any moment t there will be one particular structure L0 ∈ ℒ which may induce any R ∈ ℛ at t + 1. Suppose R0 is induced at t + 1; any L ∈ ℒ may be induced at t + 2.

Fig. 2. (Diagram: the many to many mappings from ℒ to ℛ and from ℛ to ℒ, giving on average the cycle STRUCTURE ⇄ ACTIVITY.)

The organism that is reproduced is the mapping ℒ ⇄ ℛ* and the elements of these sets. It is convenient to denote it G and to write L(G) ⊂ ℒ for any structural component of G and R(G) ⊂ ℛ for any dynamic component of G.

However, neither ℒ nor ℛ is invariant. To be precise, we should have written ℒ(θ) and ℛ(θ), where θ is a function of the commodity needed for survival of a structure or the maintenance of activity. Thus for θ = 1 and θ = 2 we might have different sets ℒ(θ) and ℛ(θ). Variants arise due to changes in θ (changing ℒ(θ) and ℛ(θ)) or due to aberrant transformations like that of Fig. 4:

* To simulate a viable form of G we choose some characteristic γ, such as a distribution of activity over the entire system, that depends upon the existence of ℒ and ℛ, and control θ so that survival of an organisation is conditional upon its satisfying γ.



Fig. 4. (Diagram: an aberrant structure to activity transformation.)

or (in a way we shall consider in a moment) due to interaction between a pair of reproductive mechanisms. The important point is that variation is the only randomness in the system and it is unrelated to the randomness of dice throwing or haphazard connectivity that appears in a number of adaptive models. The many to many form of ℒ → ℛ and ℛ → ℒ has nothing to do with 'chance', although it may lead an observer to adopt a 'probabilistic' image of the system.

(5) there are constraints in the internal environment that favour the development of specific variants, in this case, of any self organising system;

(6) amongst these are constraints that favour correlated activity amongst a set of organisms, realised by communication between the organisms in this set, essentially Bonner's (1958) condition for multicellular organisms and organised populations;

(7) there is a relation of identity between certain organisms, defining species of organisms;

(8) the organisms are 'built to survive'; that is, they are homeostatic and 'ultrastable' (Ashby, 1956): they adapt to maintain the possibility of homeostasis in a changing internal environment by maximising θ.

4.2. Invariance of an organisation

Identifying a brain with a system such as E leads to a picture in which the computation performed by the brain is incidental to the basic issue of maintaining the fabric. This is a very reasonable picture, for the salient feature of a brain (or any other living assembly) is that, unlike a computing machine, it cannot be 'turned off'. If it were 'turned off' by accident, the physical structure would decay; and without sufficient metabolic activity it would decay in a less dramatic fashion. The metabolism is simply the energetic side of a reproductive process which preserves organisms within the brain (regarded as the 'internal' environment in which these organisms evolve) or preserves the organisation 'brain' itself.

4.3. The origin of the indeterminacy associated with learning

Within this framework the apparent indeterminacy of a learning mechanism has a plausible origin. Assume that the signals that are supposed to represent information in an operational language, say α, are identified in λ with the changes of state constituting the activity component in a reproductive process:

STRUCTURE ⇄ ACTIVITY




The integrity of an informational image depends upon maintaining a distinction between signals and their transformation (or the existence of a transfer function for the entire network or a component, such as a neurone). But the transfer function is determined by the structure. However, the structure is maintained by a reproductive process which entails the flux of activity, at least some of which we are anxious to call signals. Hence signals and structure cannot be isolated. Their indices are conjugate variables. Further, the mapping ACTIVITY → STRUCTURE and the mapping STRUCTURE → ACTIVITY are many to many. Thus we could obtain a partial but consistent form of description of an organism G, in terms of a probability distribution over the set of structural components of G, but to describe the form of organisation completely it would be necessary to have access to a λ description as in 3.2 or in the Appendix.
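A toy version of the structure/activity cycle of 4.1 makes the point tangible. The mappings and the hidden phase variable below are invented; the only claim is that a selection which is perfectly determined by an unobserved regularity presents itself, to an observer who records structural components alone, as a probability distribution.

```python
# An illustrative sketch (details invented) of the many to many cycle
# structure -> activity -> structure: the selection is driven by a hidden,
# perfectly deterministic phase variable, yet the record of structural
# components alone looks 'probabilistic'.

from collections import Counter

L_to_R = {"L0": ["R0", "R1"], "L1": ["R0", "R1"]}   # many to many mappings,
R_to_L = {"R0": ["L0", "L1"], "R1": ["L1"]}         # here for a fixed theta

def reproduce(steps, structure="L0"):
    history = []
    phase = 0                                   # hidden determinant, not 'chance'
    for _ in range(steps):
        activities = L_to_R[structure]
        activity = activities[phase % len(activities)]
        structures = R_to_L[activity]
        structure = structures[phase % len(structures)]
        history.append(structure)
        phase += 3                              # some unobserved regularity
    return history

print(Counter(reproduce(30)))   # a 'probabilistic' image of a deterministic organism G
```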

4.4. Identification of many evolutionary systems in a brain

Single evolutionary systems, for which it is not difficult to specify a complete description, can be programmed on a computer (MacKay, 1958; Pask, 1959). But a brain is a physical assembly which is identified with a vast, perhaps an indefinite, number of different evolutionary systems. Each one (or, in some cases, each subset of these) can be identified with an operational language, α, β, (in a sense we shall illustrate in a moment). But the evolutionary process in the assembly does not consist of the separate evolution of these systems; on the contrary, interaction occurs between the systems. When the interaction occurs within α or within β it has the calibre of information flow, but when it occurs between α and β or β and α it has the calibre of an identification, which, according to the argument of this paper, is the process which distinguishes learning from adaptation.

5. PHYSICAL MECHANISMS

5.1. Particular model

The model we shall discuss has been realised as an artifact (Pask and Lewis, 1962). The logical constituents are a pair of variably coupled evolutionary systems having a common internal environment and identified with operational languages α and β, such that α signs are one attribute and β signs another attribute of the same physical event (a unit impulse or an action potential). We assume that an observer is impelled by convention, or sheer conceptual necessity, to identify the internal environment of the system with a rigid framework (which we take to be a network of real or artificial neurones), and is anxious to assign definite locations to the physical events and to regard one or another abstraction of events as a signal. One result of this descriptive method is an indeterminacy, which manifestly has a conceptual rather than a factual origin.

For convenience, I shall tentatively identify parts of the system with parts of a brain, using correspondences that stem from a couple of physiological hypotheses. (In some respects, the working of the system plausibly fits what goes on in a brain.) But the main purpose of identification is to yield a vivid picture. Indeed, (1) the system could equally well have been identified at a sociological or a psychological



level. (2) Proof or disproof of the physiological hypotheses would have little effect upon the system, which is a cybernetic model, with a physical cogency that stems from the behaviour of an isomorphic artifact.

5.2. Internal environment

We assume the conditions of 4.2, some of them in a specialised form. The internal environment is a network of malleable (or adaptive) neurones denoted a. They may either be real or artificial, but in the latter case their design must embody certain elaborate characteristics of real neurones that will appear in this discussion. Full connectivity would be admissible but impracticable. It is, however, important that the neurones be in cyclically fibre connected groups denoted A, wherein neurones compete for a commodity indexed by θ. In a real brain the θ competition may entail a mutual inhibitory mechanism which does involve fibres, or it may depend upon diffusion of a metabolite, or it may be due to the transfer of metabolites from the glial cells surrounding the cortical neurones.

There are a pair of energetic constraints:

(I) Any region of the network is constrained to manifest a minimum activity, and is physically restricted to a maximum activity dependent upon θ (this condition ensures that neuronal reproductive mechanisms are 'driven' by undifferentiated energy).

(II) Any neurone is required to maintain a minimum metabolic level and is limited to a maximum level dependent upon θ (this condition ensures that intracellular reproductive mechanisms are 'driven' by undifferentiated energy).

Recall that θ is an index of the rate at which structures can be synthesized and maintained. We shall be concerned, on the one hand, with relatively macroscopic structures (plastic changes at synaptic connections) and, on the other, with microscopic structures (molecular synthesis). Hence the physical restrictions indexed by θ differ greatly from one case to another.

Again, we posed in 4.2 the objective of survival and the possibility of decay. There is no question of neurones decaying as such. But there is plenty of evidence that neurones may be more or less reversibly oblivescent, that 'plastic' changes can literally decay, and that a molecular species must be maintained by specific synthesis.

The physical events appearing in the internal environment are impulse patterns, of which the spatial attributes are α signs and the temporal attributes β signs.

5.3. Memory systems, learning mechanisms, and the evolving organisms

First, let us consider the evolutionary system over an interval so brief that it has little chance to evolve. In this relatively static picture it will be possible to study the relation between an α system and a β system and the processes of information flow, computation and identification.

Within the static picture, evolving organisms which are later to be identified with learning mechanisms appear in a degenerate form as 'memory mechanisms', or systems that reproduce organisations called 'a memory'. Each organism is a subsystem of one or another evolutionary system, the α or the β system (we do not suggest that a real



brain has only two systems, probably it has an indefinite number; but two is the least number required to exhibit these principles).

5.4. The α system

We assign an operational language α to an evolutionary system which is similar to the resonance structure considered by Pringle (1951). This assignment must, of course, be justified later by examining the relation between this and other systems, to ascertain that it does define the relation between a language α and other languages.

The evolving organisms are reproduced by interaction between a spatially distributed pattern of activity in the oscillatory neurone network and a structural templet consisting of plastic modifications of the synapses of some of the neurones a in a group A; for example, the neurones a1, a2, ....., ai1 in group A1. Recalling the driving condition of 5.2, the activity amongst the neurones of A1 will depend, in the absence of impulses received from another group A2, upon the structural templet of A1, and any pattern that persists depends upon a dynamic equilibrium achieved by the reproductive mechanism:

Impulse pattern ⇄ Plastic changes

The organism G1 which is being reproduced is a relation between impulse pattern and plastic change; it is denoted Gα_1, the superscript standing for the operational language and the subscript for its identity.

Equivalently, Gα_1 = ℒα_1 → ℛα_1. Hence Gα_1 has dynamic components R(Gα_1) ⊂ ℛα_1, which are not localised in the network, and structural components L(Gα_1) ⊂ ℒα_1, which are localised within the region A1 of this network.

Neurones in A1 are linked by parallel fibres to other regions like the group A2 = a1, a2, ....., ai2, and since any dynamic component R(Gα_1) is a pattern of impulses it may be transmitted by this parallel fibre arrangement to A2.

Now, depending upon the state of A2 (namely, which Gα are being reproduced at A2), these impulses may induce some R(Gα_1) and later Gα_1 at A2, because of which we call R(Gα_1) an α sign for Gα_1 (it is neither necessarily nor commonly a unique sign for Gα_1). Hence organisms in α may be reproduced at a given location or transmitted, through their dynamic component, to induce a remote reproductive mechanism for the same organism. α signs are the dynamic components of organisms and are the spatial pattern attribute of certain physical events.

Thus, as a consequence of describing the internal environment in λ as a network and distinguishing an α system:

(I) All α organisms Gα_i are associated with a reproductive mechanism which, at any instant, has a definite location upon some group, A1, of neurones.

(II) α signs are dynamic components R(Gα_i).

(III) The set of α signs is determined by the spatial pattern attribute of physical impulse sequences.

(IV) Only α signs have 'meaning' (a well defined operational effect in α). They are transmitted by α couplings of appropriate (parallel fibre) capacity.
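A deliberately crude sketch of (I)-(IV), with the reproductive mechanism collapsed into a single imprinting step, shows the sense in which transmitting the dynamic component R(Gα) can seed the same organism at a remote group; every other feature of the artifact is omitted and the representation is an assumption of the sketch.

```python
# A schematic sketch (all names hypothetical) of the alpha system: an alpha
# organism is a relation between a spatial impulse pattern (its dynamic
# component) and plastic synaptic changes (its structural component); sending
# the pattern down parallel fibres to another group seeds the same mechanism.

class NeuroneGroup:
    def __init__(self, name):
        self.name = name
        self.templet = None                 # plastic changes, L(G_alpha)

    def receive(self, pattern):
        """Impulse pattern -> plastic changes -> impulse pattern (one cycle)."""
        if self.templet is None:
            self.templet = frozenset(pattern)    # the pattern imprints a templet
        # the templet, in turn, regenerates the pattern it was built from
        return set(self.templet)

A1, A2 = NeuroneGroup("A1"), NeuroneGroup("A2")
pattern = A1.receive({"a1", "a3", "a7"})    # G_alpha reproduced locally at A1
pattern = A2.receive(pattern)               # R(G_alpha) transmitted to A2
print(A2.templet == A1.templet)             # True: remote reproduction of the organism
```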



5.5. The β system

We shall assign an operational language β to an evolutionary system proposed as a memory mechanism by Hyden (1960) as a result of his experiments upon the changes in RNA concentration in stimulated neurones. Similar mechanisms have been suggested by Polonsky (1961) and Barbizet and Albarde (1961). We assume that the frequency (or a comparable attribute) of a temporal sequence of impulses is able to modify some of the RNA molecules produced by a neurone, so that a correspondingly modified protein is synthesized. We further suppose that this modified protein (which will only be available in neurones that have previously received a stimulus at this frequency) is specifically sensitive to this stimulation frequency and dissociates, if the frequency occurs upon a subsequent occasion, to yield acetylcholine or some other activator. Hence certain neurones embody 'specific receptors', denoted b, for specific frequencies of impulse sequences, and since the frequency is represented by a modification in RNA synthesis a neurone a may include vast numbers of 'specific' receptors, b(1), b(2), ....., b(m), for different frequencies. Hence β signs are frequencies or temporal attributes of an impulse sequence; in particular, a β sign is that frequency able to induce a specific receptor b. An organism Gβ_j in β is maintained by a reproductive mechanism which, recalling the 'driving' condition of 5.2, is represented by the double cycle:

(Double cycle schematic: RNA template, amino acids, specific protein; specific impulse frequency, activator, impulse.)

which might be formulated:

Gβ_j = ℒβ_j → ℒβ_j → ℛβ_j

which can be reduced to:

Gβ_j = ℒβ_j → ℛβ_j

if we keep in mind the fact that the structural components L(Gβ_j) are automatically perpetuated (due to the persistence of the RNA modification) and consequently are likely to be distributed (any neurone having a specific receptor will retain it with a minimum of oscillatory impulse reinforcement). Hence the location of the structural component of Gβ_j in the network at any instant is likely to be a set of specific receptors,

which will be very large. Thus, as a consequence of describing the internal environment in λ as a network and distinguishing a β system:

(I) All β organisms Gβ_j are associated with a reproductive mechanism which is distributed (since its structural and its dynamic components are distributed), being located only in a set B, with specific receptors b in most neurones.

(II) β signs are dynamic components R(Gβ_j).




(III) The set of β signs is determined by the frequency attribute of all physical impulse sequences.

(IV) Only β signs have 'meaning' (a well defined operational effect) in β. They are transmitted along any coupling, which is any fibre of appropriate (frequency limiting) capacity.
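The β mechanism can likewise be caricatured in a few lines; the tolerance, the frequencies and the two step behaviour below are assumptions of the sketch, not properties reported by Hyden or built into the artifact.

```python
# A highly simplified sketch (frequencies and tolerance invented) of the beta
# mechanism: prior stimulation at a frequency leaves a 'specific receptor' in
# the neurone; if that frequency recurs, the receptor responds, so the
# frequency itself acts as a beta sign.

class BetaNeurone:
    def __init__(self, tolerance=0.5):
        self.receptors = set()              # frequencies with a modified-RNA record
        self.tolerance = tolerance          # how sharply tuned a receptor is (Hz)

    def stimulate(self, frequency_hz):
        """First exposure lays down a receptor; re-exposure triggers it."""
        for f in self.receptors:
            if abs(f - frequency_hz) <= self.tolerance:
                return "activator released"     # structural component already present
        self.receptors.add(frequency_hz)        # L(G_beta) is laid down and persists
        return "receptor synthesised"

n = BetaNeurone()
print(n.stimulate(40.0))   # 'receptor synthesised'
print(n.stimulate(40.2))   # 'activator released': the beta sign is recognised
print(n.stimulate(10.0))   # 'receptor synthesised': a different beta organism
```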

5.6. Identification between α and β systems

The physical events, impulse sequences distinct in λ, have an α and a β aspect. Further, although an α sign has no 'meaning' in β, any sequence of α signs implies a frequency of impulses at some point in the network, which may be (and by a suitable choice of parameters almost certainly is) a β sign. Thus the flux of α signs generates β signs, but the β sign (or frequency) generated by a given α activity will normally differ at different points in the network. Conversely, many different modes of α activity will induce the same β sign at a given point.

The signs have lost the property of being signals and bear a closer relation to Greene’s percepts.

We have argued that a peculiar, location dependent, many to many mapping exists from α to β. Conversely, since any set of frequencies induces a spatial distribution, there is a comparable relation between β and α.
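The dual reading of one physical event can be shown directly; the representation of an event as a list of (neurone, time) spikes is an assumption of this sketch. The spatial attribute is an α sign, the frequency seen at a chosen point is a β sign, and the α to β relation is location dependent and many to many.

```python
# A sketch (representation mine) of one physical event, a train of impulses,
# read two ways: its spatial attribute is an alpha sign; the frequency it
# produces at a particular point of the network is a beta sign.

def alpha_sign(spikes):
    """Spatial attribute: which neurones fired at all during the event."""
    return frozenset(loc for loc, _ in spikes)

def beta_sign(spikes, location, window=1.0):
    """Temporal attribute at one point: impulses per second seen there."""
    return sum(1 for loc, _ in spikes if loc == location) / window

# spikes are (neurone, time) pairs over a one-second window
event_1 = [("a1", 0.1), ("a1", 0.6), ("a2", 0.3)]
event_2 = [("a1", 0.2), ("a1", 0.8), ("a3", 0.5)]

print(alpha_sign(event_1) == alpha_sign(event_2))            # False: different alpha signs
print(beta_sign(event_1, "a1") == beta_sign(event_2, "a1"))  # True: same beta sign at a1
print(beta_sign(event_1, "a1"), beta_sign(event_1, "a2"))    # 2.0 1.0: beta sign depends on location
```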

The immediately important point is that our assignment of operational languages α, β to a pair of evolutionary systems appears to be sufficiently justified, because the relation between α and β is exactly the many to many, λ indeterminate, and organism dependent relationship we discussed in 3.3. The doubtful part of the argument is the property of organism dependence; to exhibit it, notice that the frequency induced (as a β sign) by a sequence of α signs depends upon the position in the network at which the β sign is specified; similarly, the impulse pattern induced (as an α sign) by a collection of β signs depends upon the set of points specified as recipients. Now the relation between operational languages which, we assert, depends upon the organism is 'translation' or 'identification of sign sequences', and since any α organism has a localised structural component, position dependence secures this property for any β → α translation. However, a β organism has a distributed structural component. Hence the property will apply for α → β translation excepting, perhaps, the case of a completely distributed β organism.

Identification appears in our system as a process whereby a set of β signs comes to 'mean the same thing to' a certain α organism, say Gα_i in a region A1, as a given α sign, say R(Gα_j), which is one dynamic component of an α organism Gα_j in a region A2. The phrase 'meaning the same thing to' Gα_i implies that in a λ description the same change of state is induced by a given sequence of β signs as was previously induced, or is coincidently induced, by the α sign R(Gα_j). In other words, a transformation like:

Fig. 5. (Diagram of the induced change of state.)



is induced either by an α sign or by a set of previously incomparable β signs. The same comments apply to the identification of an α sign.

6. THE EVOLUTION OF ORGANISMS

6.1. Organisms as decision makers

In the relatively static picture we have considered so far, a group of neurones, say A1, acts as a decision making system which, at a particular instant, resembles a finite automaton with a transfer function determined by Gα_1 and realised as an object in the sense of 3.2 and 3.3, say:

F1: (Xα_1 ⊗ Yα_1) → Yα_1

where Xα_1 is the set of α signs distinguishable at the incoming fibres of A1 and Yα_1 is the set of α signs distinguishable at the A1 outgoing fibres.

But it is important to recall that the substance of the organism is the reproductive process for Gα_1 and not the particular neurones that happen to mediate this organisation. The organism sits at A1 and uses the A1 neurones as its body, perhaps only for a moment, perhaps indefinitely. The grouping condition of 5.2 secured a number of positions in the network of neurones where organisms could sit comfortably. Thus a feasible λ structure, if an arrow implies a fibre connection, a dotted line a θ dependence, and a circle a neurone, would be:

(Network diagram: Xα_1, binary states of the incoming fibres; Yα_1, binary states of the outgoing fibres.)

and any α organism is an organisation involving loops like a1 ⇄ a2 or a1 → a2 → a3 → a1, together with plastic changes at the synapses of these neurones.

However, the A1 neurones also mediate parts of β organisms, since each neurone is likely to embody several specific receptors b. We could, though with less practical point, define a distributed β automaton with a transfer function of the same form, FB: (Xβ ⊗ Yβ) → Yβ, where Xβ is the set of β signs appearing in all b ∈ B and Yβ is the set of β signs produced by the neurones a that embody these specific receptors.

From the engineering point of view any neurone can be regarded as a dual purpose mechanism which includes a number of functional parts (Fig. 7). The incoming fibres provide impulses which are filtered for their spatial component by an α filter (the θ input to this box is a plausible innovation) and give rise to an α

References p . 212-214

Page 207: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

Fig. 7. (Schematic of the dual purpose neurone: the incoming impulse pattern passes through an α filter, which exchanges θ with neighbouring α decision units and α filters, and through a β filter with its specific receptors b, of which there are many in real neurones.)

decision (plausibly a state change that exerts some θ effect upon the neighbouring α filters). The incoming impulses are also filtered by a β filter for their frequency component and give rise to a β decision involving a specific receptor, which is shown as a pair of coupled boxes. The neurone produces an output impulse which may be part of either an α sign or a β sign, and the decision to emit this impulse depends upon the weights Vα, Vβ given to the α, β decisions.
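Read as a block diagram, Fig. 7 invites a short schematic rendering; the weights, thresholds and tuning widths below are invented, and the only behaviour reproduced is the weighted competition between the α and β decisions for control of the output impulse.

```python
# A schematic rendering (parameters invented) of a dual purpose neurone:
# one filter tests the spatial component of the input (the alpha decision),
# another tests its frequency against the specific receptors (the beta
# decision); the weights V_alpha and V_beta decide whether an impulse is
# emitted, and they are themselves open to theta-maximising adaptation.

class DualNeurone:
    def __init__(self, preferred_pattern, receptor_frequencies,
                 v_alpha=0.6, v_beta=0.4, threshold=0.5):
        self.preferred_pattern = frozenset(preferred_pattern)
        self.receptors = set(receptor_frequencies)     # specific receptors b(1)..b(m)
        self.v_alpha, self.v_beta, self.threshold = v_alpha, v_beta, threshold

    def fire(self, active_inputs, input_frequency):
        alpha_decision = float(self.preferred_pattern <= set(active_inputs))
        beta_decision = float(any(abs(f - input_frequency) < 1.0 for f in self.receptors))
        drive = self.v_alpha * alpha_decision + self.v_beta * beta_decision
        return drive >= self.threshold          # emit an output impulse or not

n = DualNeurone(preferred_pattern={"f1", "f2"}, receptor_frequencies={40.0})
print(n.fire({"f1", "f2", "f4"}, 12.0))   # True: the alpha decision alone suffices
print(n.fire({"f3"}, 40.0))               # False: the beta decision alone does not
n.v_alpha, n.v_beta = 0.3, 0.7            # adaptation shifts the neurone toward beta
print(n.fire({"f3"}, 40.0))               # True: it now behaves as part of a beta organism
```

Shifting Vα and Vβ is, in this caricature, the adaptation that moves a neurone between serving an α organism and serving a β organism, as 6.2 goes on to discuss.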

6.2. Interaction with evolving organisms

Hence any collection of neurones is potentially the location of an α or a β organism, if it takes part in an α reproducing or a β reproducing process. The common element in these processes is an impulse, which is the physical event we distinguish as a sign in λ.

Suppose we have evidence to suggest that a certain group of neurones is the location of an α organism; we can interact with it in the operational language α by introducing stimulus and recording electrodes and correlating the spatial or α sign characteristic of its input with any spatially patterned stimuli we introduce (Fig. 8).

Fig. 8. (Schematic of interaction with the neurone network: α language interaction; selected frequency, frequency filter, autocorrelation, random location.)



At any moment any neurone may be more or less a part of an α or a β organisation. Its status in this respect is chiefly a function of Vα and Vβ which, however, we must assume to be variable. Now we have already asserted that a neurone is, in a restricted sense, a θ maximising device, and it is no innovation to remark that if the neat boxes did exist (in real neurones, at any rate, they do not), then one of the θ maximising adaptations would be a variation of Vα and Vβ.

Hence, suppose we α interact with the neurone group at region A1, using such a correlation function in our external feedback loop that θ is maximised (and if we do not, our interaction cannot persist); then the A1 neurones will be induced to increase Vα and behave as part of an α organism. A similar comment applies in the case of a stable, θ maximising β interaction, when the A1 neurones will be induced to behave as part of a β organism.

This is the dilemma of any conversational method for observing a system. But unless such a definitely α interaction or definitely β interaction is stabilised, the organisms in the system will have α and β aspects. To some extent we can regain determinacy, and the chance to know for certain the implication of a sign, by talking, as we have done, about an α organism identifying the dynamic component of a β organism as a β sign, or vice versa; but this expedient has doubtful value whenever it becomes necessary to admit structural α and β interactions. In fact, the commonest organisms are linguistic hybrids, which is why the system is difficult to observe in any other than a conversational way, that is, by changing from α to β as necessary in order to maintain the discourse.

6.3. A self organising system

In 5.2 we defined the internal environment as embodying a set of constraints which would favour the evolution of an organism if it was a self organising system; in other words, only a self organising system would receive an adequate amount of θ.

Consider an α decision maker in the sense of our previous discussion. It will be a self organising system if and only if the rate of change of the redundancy of its behaviour is positive; namely, if and only if Δ redundancy (x, y)/Δt > 0.
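Von Foerster's redundancy, R = 1 - H/Hmax, makes the criterion easy to evaluate on recorded behaviour; the outcome data and the window size in the sketch below are invented, and only the sign of the change matters.

```python
# A sketch of the self organisation criterion: compute the redundancy
# R = 1 - H/H_max of the behavioural outcomes (x, y) in two successive
# observation windows and check that it has increased.

from collections import Counter
from math import log2

def redundancy(outcomes, n_possible):
    counts = Counter(outcomes)
    total = len(outcomes)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return 1.0 - h / log2(n_possible)

# behavioural outcomes in two successive windows; four outcomes are possible
early = [("x1", "y1"), ("x1", "y2"), ("x2", "y1"), ("x2", "y2")]   # maximally varied
late = [("x1", "y1"), ("x1", "y1"), ("x1", "y1"), ("x2", "y1")]    # behaviour has tightened

delta = redundancy(late, 4) - redundancy(early, 4)
print(delta > 0)    # True: over this interval the system behaves as a self organising system
```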

In a variable internal environment a θ maximising adaptive system can behave as a self organising system in this manner: adaptation will lead to a transfer function which maximises θ, but before this occurs the environment will change, requiring a different kind of adaptive modification. Thus, in a certain sense, the system is always adapting, and the redundancy of the behavioural outcomes is always, on average, increasing.

This analysis is tempting but spurious, because it relies upon redefining the set X and hence redefining the 'system' (Beer, 1961). For a given set X (namely, in an invariant or stationary environment) the system (with behavioural outcomes X ⊗ Y) will adapt to maximise θ. The redundancy will increase for a while but will eventually become constant. At this point the system will no longer behave as a self organising system and will no longer be favoured with an adequate value of θ. Since the physical structure of the organism depends upon θ, it must become a self organising system in order to survive.



6.4. The mechanism of evolution

To become a self organising system, the domain of the transfer function, which is a set X1 ⊗ Y1 of behavioural outcomes, must be transformed, (X1 ⊗ Y1) → (X2 ⊗ Y2) or (X1 ⊗ Y1) → T(X1 ⊗ Y1), where T is defined as a transformation that maintains

Δ redundancy (x, y)/Δt > 0

or, equivalently, writing E and T(E), a transformation that maintains

Δ redundancy (E)/Δt > 0.

Hence, although no system E with an invariant domain can be a stable self organising system, the disjunction E*, where

E* = ∪ [T^r(E)], the union being taken over r = 0, 1, 2, ....., R,

is a stable self organising system if it exists. We call E* the evolution of E and comment that its existence is guaranteed by the

physical constraints in networks such as Caianiello’s (1961). But the evolution can be either of two kinds.

(1) The set of behavioural outcome states is transformed, or evolves, due to a correlation between the activity of a larger set of neurones. In this case the α organism expands to occupy a larger volume in the network. The mechanism of evolution is transmission of α signs (say from A1 to a neurone group harbouring a similar activity, like A2). The result is resonance between the organisms located at A1 and A2, to yield a system with outcome states (Xα_1 ∪ Xα_2) ⊗ (Yα_1 ∪ Yα_2). Let us call this regional evolution.

(2) The set of behavioural outcome states is transformed by the inclusion of β states, whereby the organism becomes equivalently capable of identifying β signs, or an α, β hybrid, or, in the limiting case, a definitely β organism. The mechanism is a change in the weights Vα and Vβ (an increase in Vβ/(Vα + Vβ)) of neurones, only some of which will ordinarily be in the group A. The resulting organism will have a set (Xα_1 ∪ Xβ) ⊗ (Yα_1 ∪ Yβ) of behavioural outcomes, but (since Xα_1 and Xβ, and Yα_1 and Yβ, are probably connected with entirely different neurones) this assertion has little practical importance. The process, however, is important and we shall call it linguistic evolution.

6.5. The necessity for evolution

It is not difficult to show that, if the evolutionary system is left alone, the organisms will evolve in the regional sense of (1) and by the linguistic evolution of (2). The argument will not be given in this paper, but we comment that the result is a consequence of the general principle that one evolving organism will exist in an internal environment increasingly determined by the other organisms of its own species.

6.6. Learning related to evolution

Later, in 8.1 and 8.2, we shall contend that the regional or the linguistic evolution of an organism has the characteristic required for mechanisms that mediate various



kinds of learning behaviour. Before developing the point it will be worth while considering a related case that illustrates the enormous generality of this process.

7. DIGRESSION INTO PHYSICAL ASSEMBLIES

7.1. Solving an undecidable problem

Consider any automaton able to receive evidence from a set of possibilities U and to select actions from a further set of possibilities C. Let the automaton depend upon θ for physical survival, and let it be programmed to maximise the θ available in its environment with a transfer function f: (U ⊗ C) → C, which is often called a decision rule. Suppose that this automaton is presented with an undecidable problem; in other words, given certain evidence u0 ∈ U and having selected c0 ∈ C, the decision rule specifies no change of state which will increase θ, yet u0 remains invariant and in u0, c0 the value of θ is decreasing. Without further specification the automaton is unable to survive. It must remain in u0, c0 until it decays.

The organisms we have considered are momentarily like the automaton. Their strategy for avoiding an undecidable and destructive situation is to evolve into another organism which may be able to solve the problem.

The present automaton could also be designed with this strategy, in which case its response to an undecidable and destructive situation would be U → U*, U ⊂ U* (to obtain novel evidence on the basis of which to decide) or C → C*, C ⊂ C* (to select different actions in an attempt to elicit further evidence). The fact that most automata are not designed like this is due to the form of evidence commonly presented, namely the values of one or a limited number of continuous variables. The set of possibilities must still be increased in order to solve the problem, but when the environment has this very special structure it is possible to invoke the doctrine of chance events, to throw a dice and select amongst the existing set of possibilities by chance; for these possibilities define the environment completely.
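The contrast can be put as a small pseudo-experiment; the payoffs, the decay constant and the single new action below are invented. When the decision rule finds no θ-increasing move, this automaton enlarges C rather than throwing a dice over the old possibilities.

```python
# A toy automaton (structure invented): a decision rule over evidence U and
# actions C tries to keep theta from decaying; in the 'undecidable' situation,
# where no admissible selection raises theta, it enlarges its set of actions,
# C -> C*, instead of choosing among the old ones by chance.

def run(theta, evidence, actions, payoff, steps=5):
    u = evidence[0]                                     # the evidence stays stubbornly fixed
    for _ in range(steps):
        theta -= 0.2                                    # theta decays unless countered
        best = max(actions, key=lambda c: payoff.get((u, c), 0.0))
        if payoff.get((u, best), 0.0) > 0.0:
            theta += payoff[(u, best)]                  # the decision rule suffices
        else:
            actions = actions + ["c_star"]              # C -> C*: evolve a new possibility
            payoff[(u, "c_star")] = 0.5                 # (assume the new action elicits theta)
    return theta

payoff = {("u0", "c0"): 0.0, ("u0", "c1"): 0.0}         # u0 is undecidable with the old C
print(run(theta=1.0, evidence=["u0"], actions=["c0", "c1"], payoff=payoff))
```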

7.2. Decision in unstable equilibrium

Let us now reduce the choice situation to its most rudimentary form, namely a ball balanced on a pin and able to fall in various directions. Suppose the point of the pin is progressively narrowed, so that the equilibrium becomes increasingly unstable.

Now this rudimentary situation is ‘undecidable’ in the sense that at any moment when the ball rests on the pin there is nothing to ‘predispose’ it to one direction of falling rather than to another; otherwise it would have fallen. As the pin is narrowed, the ball must eventually topple over, when we assume that it is replaced and that the process is repeated.

The event of toppling over can be described in two different ways:

(1) As an event determined by some physical variable or combination of variables, such as a breath of wind or a magnetic field. As the pin narrows, an increasing number of these variables exert sufficient effect upon the ball to determine its descent. Hence narrowing the pin has the effect of increasing the set of significant variables, that is,



of variables which significantly determine the outcome and of which there are an indefinitely large number.

(2) As a matter of chance, for we are ignorant of, or indifferent to, the determining variables.

Ordinarily, either (1) or (2) is acceptable. But obviously it would be essential to use (1) and not (2) if the ball could somehow profit from falling in one direction, say X; and if being pushed by the breeze determined its descent into X when the pin was less narrow and the system more stable; and if, as a result, the ball adapted to become more sensitive to the breeze than to the other variables. This adaptive process is distinct from adaptations that select a 'value of a variable', such as 'the strength of breeze', which may also occur.

A comparable selection from the states of an indefinitely large number of possible variables takes place in biological evolution to determine, for example, the sense modalities of organisms. Trivially, in the evolutionary system we have considered, or non-trivially in the maturation of a biological brain, the selection is made, instead, from forms described in an indefinitely large number of possible languages.

We are now able to relate entities in the evolutionary system to learning mechanisms in a very general way indeed. Having demonstrated the diversity of systems capable of embodying these principles, we need no longer restrict our attention to a pair of operational languages. The evolutionary system, when realised in a physical assembly, has a special and important ‘analogue’ property (MacKay has pointed out a rather similar property in other systems). Conventionally we are accustomed to analogue devices that select from an indefinite number of values of a variable (the device works with continuous variables). The physically realised evolutionary system selects from an indefinite number of variables or an indefinite number of operational languages.

8. DIFFERENT KINDS OF LEARNING*

8.1. Forms of learning mechanism
The limiting case of a learning mechanism is any system with adaptive properties. These are a prerequisite of learning or evolution. Consider discourse in α with the evolutionary system we have discussed and suppose that the organism or set of organisms involved in the feedback path of the α signs of the discourse is in a condition where, equivalently: (1) it must evolve in order to survive; (2) it is no longer a stable self organising system; (3) it is presented with an undecidable problem; (4) it is unstable like the ball on a pin.

Then evolution occurs, which will be manifest in the α discourse as a learning behaviour.

If regional evolution takes place the possible behaviours are stimulus or response generalisation or the development of a body of related experience; knowledge could be accumulated in terms of different orders of conditional probabilities of events or existing constructs; but no organisation other than the organisations predicated in the α system can occur and hence no essentially novel concept or relation can arise (for even variation in the system which may be interpreted as chance trial learning induces organisms predicated in the α system). Briefly, then, the process of regional evolution implies a mechanism that can account for inductive learning (and, by extending the model to include its response selection, inductive inference in the sense of Braithwaite and Popper (1962)). It may also account for change of attention within a field made up of similarly ordered objects. The apparent indeterminacy entailed in the learning process is due to the intrusion of the β signs (or the hybrid character of the evolving organisms) that (in many systems) is also the chief progenitor of variation.

* See also the Appendix.

If linguistic evolution occurs the resulting behaviour can manifest originality, providing that the system is conceivable to whoever indulges in the discourse. α signs are identified with a set of β signs and vice versa. The evolutionary level at which identification of α and β signs is deemed to take place depends upon his discrimination and is not otherwise demarcated from the hybrid interaction which is discounted as irrelevant. So, to use a rather inadequate analogy, some α signs which appear ‘hazily valued’ in the discourse will, at some evolutionary level, assume ‘sharp values’ as α signs identified with sets of β signs.

The mechanism of linguistic evolution can give rise to novel concepts and relations, for example, the relations involved in constructing the conceptual hierarchy of a hypothetico-deductive structure in the sense of Braithwaite and Popper. It can account for attention to previously incomparable objects or entities. Finally, insight is one behavioural consequence of such a system. The flux of α discourse is likely to be, perhaps certain to be, mapped into the interaction of β organisms, where it remains as an inaccessible ‘hazily valued’ β representation until linguistic evolution takes place and sets of β signs are identified with α signs and become ‘sharp valued’. Before this occurs, the data may have been modified in its β representation and, if so, we should say that this identification was insightful. Hence, according to our original criteria, learning, in this system, entails linguistic evolution.

8.2. Relation between organisations
In conclusion we should stress the physical character of this system. Perhaps we have insisted often enough that any organism is designed to survive as a physical entity; that its ‘decisions’ are made to maintain its fabric and that the abstraction θ is only a convenient index for whatever energetic conditions are needed to maintain and construct its fabric. On these grounds alone the form of organism that evolves must depend upon the physical assembly that embodies the evolutionary system. Without this material dependence* our conclusions would be limited to systems which literally had only an α language and a β language.

But the physical character of the system plays an equally important part in determining the relations, in particular the similarities, that exist within the system, and in maintaining a correspondence between these and the relations that pertain in an external environment that is built of the same material (MacKay, 1958; Beer, 1961).

* The appropriate abstraction for such a materially dependent system is an infinite automaton, as discussed, for example, by Loefgren in a recent paper (our evolving organisms are special and rather intractable cases of infinite automata).

In terms of evolving organisms there are relations between individuals and their ancestors and between species (the relations required in 4.4. and realised by the material constraints). Corresponding to these are relations between forms of sign in a given operational language and between operational languages; physically, in the α, β case, these are realised as relations between spatial patterns or between frequencies, and as relations between spatial patterns and frequencies. Finally, in discourse there are relations between concepts of the same kind and concepts of different kinds.

The material dependence of a brain or an artifact suggests that manifestly coherent experience stems from a cohesive physical or intellectual environment of which the brain is a specialised part.

APPENDIX

Comment upon algorithms
In the discussion Professor Rashevsky suggested that the Markovian theory of algorithms could be profitably applied in connection with this learning model. A tentative approach will be outlined, since the method seems to be promising.

We interpret an algorithm as a computing procedure which has conceptual integrity (that, in some sense, Ξ could reveal the algorithms it uses whilst remaining ignorant of their stages of application, though each stage entails some mechanical change). Conceived in this way, an algorithm is a process which reduces Ξ’s conceptual uncertainty (given x, y, and x, about the next y).

Algorithms η act upon a vocabulary which is some sequential representation, τ, of X^α, to produce words in τ(X^α) or otherwise to produce words in τ(Y^α). Given (x₀, y₀), Ξ selects some algorithm η ∈ U(μ), where all η ∈ U(μ) either: (I) satisfy the function f_φ, selected, at this instant, from F^α, in the sense that η(τ(x₀)) = f_φ(x₀, y₀); x₀ ∈ X^α, y₀ ∈ Y^α; or (II) are the first member of a finite sequence of m algorithms

η_m{η_{m−1}[ . . . η₁[τ(x₀)] . . . ]} = f_φ(x₀, y₀)

where any η_p[τ(x)] ∈ τ(X^α ∪ Y^α) for m ≥ p ≥ 1.

‘Conceptual’ uncertainty is envisaged in terms of the average number ν(μ) of algorithms which are required by Ξ, given x₀, y₀, to satisfy f_φ ∈ F^α. In other words, we interpret the average length of the sequence of algorithms that is needed to ‘solve a problem’ as an index of the uncertainty engendered by ‘posing’ this problem, and learning is conceived as a process which maintains the rate of reduction of ν within limits, say ξ₁ ≥ −dν/dt ≥ ξ₂; ξ₁ > ξ₂ > 0 (hence Ξ is a self organising system). Consider the possibility of interaction between this ‘conceptual’ level of index and state indices at a mechanical level; in particular, that a control is exerted to stabilise −dν/dt by adjustment of these ‘mechanical’ parameters.
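A minimal sketch of the control suggested here, assuming a very crude ‘mechanical’ model in which the rate of adaptation governs how quickly ν falls; the symbols nu, xi1 and xi2 follow the text, everything else is invented.

```python
# Hypothetical sketch: a 'mechanical' parameter (the rate of adaptation)
# is adjusted so that the rate of reduction of the conceptual index nu
# stays within xi2 <= -dnu/dt <= xi1.

xi1, xi2 = 0.8, 0.2        # assumed bounds on -dnu/dt
nu = 50.0                  # average number of algorithms needed per problem
rate = 0.1                 # mechanical parameter: rate of adaptation

history = []
for t in range(100):
    dnu = -rate * nu * 0.1          # assumed model: faster adaptation,
    nu = max(nu + dnu, 1.0)         # faster reduction of nu
    reduction = -dnu
    if reduction > xi1:             # reducing too fast: slow adaptation down
        rate *= 0.9
    elif reduction < xi2:           # reducing too slowly: speed it up
        rate *= 1.1
    history.append((t, round(nu, 2), round(rate, 3)))

print(history[-1])
```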

We have distinguished certain underlying mechanisms of learning, namely ‘adaptation’ and ‘regional and linguistic evolution’. These occur in a medium or ‘environment’ (such as a brain or artifact) wherein the rate of activity must, on average, remain constant. Adaptation is a mechanism whereby repeated events ingrain structural patterns; in α, spatial distributions of synaptic impedances and, in β, more or less ramifying, frequency-sensitive pathways. Given the mechanical particulars a pair of indices of the degree of adaptation are calculable, say a_α and a_β. If a_α is high then μ → μ₀, whilst θ is embodied in the particular part of the medium (or internal environment) for which a_α was calculated.

Regional evolution is a process whereby activity correlations increase the volume of the medium that, at any instant, is embodying θ; once again, given the mechanical particulars, an index of correlation is calculable, say e. The higher e, the more elaborate the mechanical concomitants of any ‘conceptual’ event (the application of an algorithm η).

Now adaptation implies a restriction upon the range of the available algorithms; its effect is unspecific but, in general, since adaptation implies μ → μ₀ and since ν(μ ∪ μ₀) > ν(μ₀), the result of adaptation will be a reduction in ν, and increasing the rate of adaptation will increase −dν/dt.

At the conceptual level, decrease in ν(μ) implies a composition of algorithms; the selection of η*[τ(x)] = η₁[η₂[τ(x)]] in place of η₁ after η₂. Composition increases the number of members of U that have been used at least once. Let γ(η) be the number of stages (which do correspond to mechanical events) entailed by the application of η. Commonly, γ(η*) > γ(η₁) + γ(η₂), as in Rashevsky (1962). We shall identify with regional evolution the process which provides the mechanical elaboration needed to achieve composition (notice that, in addition, the regional evolution also shifts the embodiment of θ to some other region of the medium with a possibly lower value of a_α; hence a homeostasis).
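The composition of algorithms and the stage index γ can be illustrated by the following hypothetical sketch; the particular algorithms, their stage counts and the overhead attached to composition are assumptions made only to reproduce the inequality quoted above.

```python
# Hypothetical sketch of composition: eta_star = eta1 o eta2 applied as a
# single algorithm.  Stage counts gamma are invented; the composite is
# assumed to need some extra 'mechanical' stages, so that
# gamma(eta_star) > gamma(eta1) + gamma(eta2), as in the text.

class Algorithm:
    def __init__(self, name, fn, stages):
        self.name, self.fn, self.stages = name, fn, stages
    def __call__(self, word):
        return self.fn(word)

def compose(e1, e2, overhead=2):
    # Applying e1 after e2 in one piece costs their stages plus an
    # assumed overhead for the extra mechanical elaboration.
    return Algorithm(f"{e1.name}*{e2.name}",
                     lambda w: e1(e2(w)),
                     e1.stages + e2.stages + overhead)

eta1 = Algorithm("reverse", lambda w: w[::-1], stages=3)
eta2 = Algorithm("double",  lambda w: w + w,   stages=2)
eta_star = compose(eta1, eta2)

print(eta_star("ab"), eta_star.stages, eta1.stages + eta2.stages)
# -> 'baba' 7 5   (composition also enlarges the set of used algorithms)
```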

Finally, let us identify linguistic evolution with the augmentation of a vocabulary. As in Rashevsky (1962), some algorithms, such as those that replicate a word in sequence, cannot act upon any word in a given vocabulary but require additional and distinguished signs which take part in the process (although these signs are unchanged in the final result). Consequently algorithms of this kind act upon an augmented vocabulary, and Dom(U) increases. (Since the augmenting signs must be necessarily distinct, mere addition of α signs would not be sufficient.) There is a reasonable and tempting correspondence between linguistic evolution, involving change in the domain (and/or range) of θ, and the augmentation of a vocabulary from τ(X^α) to τ(X^α ∪ X^β) and from τ(Y^α) to τ(Y^α ∪ Y^β). In this case it is perhaps better to view the appearance of algorithms of this form as a consequence of the reproductive process which, we have argued, is the mechanical backbone of learning.

Comments on system representation
Let us relate the concepts of 3.4. to the formal representation of Rosen (1958) and, in particular, deal with the special case considered in detail throughout 5.1.-5.6. and 6.1.-6.5.

(1) Suppose a state description of a physical system, Ξ, in the scientific language yields a set of identified states z ∈ Z. These are partitioned into subsets Zᵢ (equivalence classes induced by the partition p) which correspond to subsystems, Ξᵢ ∈ Ξ, i = 1, 2, . . . , n.

(2) From Rosen (1958) a system is determined by a directed graph in which the directed edges correspond to mappings and the nodes of the graph to the conjunction of the domain, Dom(F), of some mapping, F, with the product set formed from the factors of subsets in the range, Ran(F), of other mappings, all Dom(F) and Ran(F) in Ξ being defined.

Thus Ξᵢ ≡ Fᵢ; Dom(Fᵢ) → Ran(Fᵢ). If Ξ is the physical embodiment of a communication system, Ξᵢ ≡ Fᵢ; Dom(Fᵢ) → Zᵢ.

We specify, also, that Zᵢ is a factor in Dom(Fᵢ), hence:

Ξᵢ ≡ Fᵢ; [(Z*_j ⊗ . . . ⊗ Z*_{j+m}) ⊗ Zᵢ] → Z*ᵢ

where Z*_j, . . . , Z*_{j+m} are subsets of Z_j, . . . , Z_{j+m}, with j, . . . , j+m taken from 1, 2, . . . , n, not i.

not i. (3) For adaptive subsystems (all subsystems of interest are adaptive) we define a

parameter, pi, pmax 3 Q.I 3 1, such that pi is an index off,t in a set Fi = ( t i t ) ,

when Ei =&i; [(Z,* @ . . . . . . . . . . Zj*+ ,J 0 Zt ] --f Zi and Ei is adaptive if, given a stationary sequence, [XI C rC, (a redundant condition since all Dom(F) and Ran(F) i n E are defined and are connected with similarly specified subsystems) a convergent sequence [qi] of values of q ~ i induces a convergent sequence [ z ] C Zi so that, after an interval of adaptation, [ z ] , is in a subset Z,* C Zi.

(4) Adaptation is directed. The convergent sequence of values of φᵢ is selected to maximise some variable which we shall identify for the moment with a payoff available to Ξᵢ (later we shall give a very different interpretation to this variable).

Formally, let Gᵢ be a function with domain the product set Zᵢ ⊗ [Z_j ⊗ . . . ⊗ Z_{j+m}], where n ≥ m + 1, which assigns a numerical value to each state in Zᵢ ⊗ [Z_j ⊗ . . . ⊗ Z_{j+m}] (each element of this set designates the state of Ξᵢ, immediately defined as some element of Zᵢ, in relation to the states of m + 1 other subsystems). Let θᵢ denote the average value of Gᵢ over an arbitrary sequence of changes in state.

Adaptation is defined as any process which selects those values of φᵢ which maximise θᵢ. Since we are not concerned with mechanism so much as a model of mechanism, it is unnecessary to comment upon how this selection is achieved. We take it as a characteristic of the physical medium in which the system is embodied (I) that selection of φᵢ (or whatever corresponds to our representation of an adaptive parameter) does occur and (II) that θᵢ-maximising selections lead to convergent sequences [φᵢ] of the values of φᵢ (hence to an adaptive change).
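A hedged sketch of directed adaptation in this sense: the family f_φ, the payoff G and the sampled states are invented, and exhaustive search stands in for whatever selection the physical medium actually performs.

```python
import random

# Hypothetical sketch of directed adaptation: the parameter phi indexes a
# family of functions f_phi, and the value selected is the one that
# maximises the average of an (invented) payoff G over observed states.

random.seed(1)
PHI_VALUES = range(1, 6)                     # 1 <= phi <= phi_max

def f(phi, state):
    # A made-up family of responses indexed by phi.
    return (phi * state) % 7

def G(response, context):
    # Invented payoff on the joint state of the subsystem and its neighbours.
    return 1.0 if response == context else 0.0

def average_theta(phi, samples):
    return sum(G(f(phi, s), c) for s, c in samples) / len(samples)

samples = [(s, (3 * s) % 7) for s in (random.randrange(7) for _ in range(200))]
phi_selected = max(PHI_VALUES, key=lambda phi: average_theta(phi, samples))
print(phi_selected, round(average_theta(phi_selected, samples), 2))
# phi = 3 reproduces the context exactly, so theta is maximised there.
```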

Finally, we comment that the selective process does entail the idea of ‘chance’, although only in the weak sense of an ‘independent’ process which decides between alternative values of φᵢ in the absence of sufficient evidence. Consequently, ‘chance’ is subsumed by the characteristics of the physical medium.

(5) Now Ξ embodies communication systems which will be separated from Ξ in a moment; that is, we shall consider certain information structures needed to maintain the physical stability of Ξ apart from the underlying physically constrained organisation. But a realistic separation rests upon the fact that, in terms of θᵢ maximisation, which is the basis of our stability criteria, communication is worth while; otherwise it cannot, physically and consistently, occur, excepting by accident. We thus assert that Gᵢ is so defined that certain elements of the set Zᵢ ⊗ (Z_j ⊗ . . . ⊗ Z_{j+m}) which can only be consistently, that is, non-accidentally, achieved by correlated action on the part of several subsystems Ξ_j, . . . , Ξ_m, in conjunction with Ξᵢ, have higher θ values for Ξᵢ than those which can be achieved independently. Gᵢ induces cooperation, which implies interaction and may entail communication.

(6) To derive a communication system from its embodiment in Ξ, we perform an abstraction whereby certain subsets of states are identified with signs. Now we are concerned with those physical systems, Ξ, which, on abstraction, yield at least a pair of distinct communication systems, A and B, characterised by operational languages α and β. (In the model of 5.1.-5.6. and 6.1.-6.5. the abstractions define the spatial pattern and the frequency attributes of physical events.)

Formalise abstraction by a many-to-one mapping J^α or J^β, where J^α(Zᵢ) = Y^α_i, J^β(Zᵢ) = Y^β_i. Hence a communication subsystem Ξ^α_i ∈ A is: Ξ^α_i ≡ f^α_i; X^α_i ⊗ Y^α_i → Y^α_i, f^α_i ∈ F^α_i = J^α(Fᵢ), and where X^α_i = Y^α_j ⊗ . . . ⊗ Y^α_{j+m} (similar definitions apply in the case of Ξ^β_k ∈ B). Finally, as a convention, we include one indifferent, or ineffective, value in each Y^α_i and each Y^β_k.

The more familiar probabilistic image of a communication subsystem is

Ξ^α_i ≡ P^α_i; X^α_i ⊗ Y^α_i → Y^α_i

where P^α_i is a matrix of empirically estimated joint probabilities of utterances x, y, say p(x, y), and the probabilities p(φᵢ) = [p(φᵢ = 1), p(φᵢ = 2), . . . , p(φᵢ = φ_max)] may be inferred from these empirical estimates.
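A small sketch of how the empirical matrix of joint probabilities might be tabulated from a sample of utterances; the sample and the further step of conditioning on x are illustrative assumptions, not the text's procedure for inferring p(φᵢ).

```python
from collections import Counter

# Hypothetical sketch of the 'probabilistic image': estimate the joint
# probabilities p(x, y) of utterances from an observed sample, as the
# matrix P referred to in the text.  The sample itself is invented.

observed = [("x1", "y1"), ("x1", "y2"), ("x2", "y2"),
            ("x1", "y1"), ("x2", "y2"), ("x1", "y1")]

counts = Counter(observed)
n = len(observed)
p_joint = {pair: c / n for pair, c in counts.items()}

# Conditional probabilities p(y | x), one row of the stochastic matrix.
p_x = Counter(x for x, _ in observed)
p_cond = {(x, y): c / p_x[x] for (x, y), c in counts.items()}

print(p_joint)
print(p_cond)
```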

(7) By previous definition communication in A is possible if and only if the directed graph of A, namely of J^α(Ξ), has cyclic subgraphs; or in B if and only if the directed graph of B, namely J^β(Ξ), has cyclic subgraphs. An alternative condition is that Ξ^α_i communicates if and only if:

X*^α_i ⊂ Y^α_j; X*^α_j ⊂ . . . ⊂ Y^α_r, where either (I) r = i, or (II) X*^α_r ⊂ Y^α_i.

(8) The α vocabulary is X^α ∪ Y^α; X^α = ∪ᵢ(X^α_i), Y^α = ∪ᵢ(Y^α_i); the β vocabulary is X^β ∪ Y^β; X^β = ∪_k(X^β_k), Y^β = ∪_k(Y^β_k). The possible utterances in α are product pairs x, y ∈ X^α ⊗ Y^α and in β product pairs x, y ∈ X^β ⊗ Y^β, but those which are achievable depend upon the functions f^α_φ and f^β_φ which define the communication subsystems. Any utterance determines a many-to-one relation between subsystems Ξ^α_j, . . . , Ξ^α_m and some particular subsystem Ξ^α_i ∈ A, the character of which depends upon the form of the system graph and the distribution of the values of the φᵢ (for the domain of a subsystem is a product set, its range is a factor in a product set). A similar argument applies to Ξ^β_k ∈ B.

Most utterances have only a local connotation.

(9) Legitimate utterances x, y = u_α ∈ U^α and x, y = u_β ∈ U^β (where U^α and U^β are sets of legitimate α and β utterances) ‘mean the same thing to’ all subsystems Ξ^α_i ∈ A (or all Ξ^β_k ∈ B).

To define ‘meaning the same thing’ we must refer to the physical embodiment of the communication system; for perhaps the least non-trivial definition, that x ‘means the same thing to’ Ξ^α_i as it does to Ξ^α_j, is that, in the same conditions, receipt of x induces the ‘same state’ in Ξ^α_i that it does in Ξ^α_j.

(10) Let z₀ be a state in Z and let E be some relation of equivalence in Z. Thus if z₀ E z, then z₀, z are equivalent states, and the equivalence class of z₀ with reference to E is denoted as z₀/E. Choice of a particular relation E, though crucial in our argument, is not determined in the formal representation, except in the trivial sense that the conceptual framework must not be contradicted by the chosen relation. In particular, if z₀ E z are regarded as the ‘same states’, signs that ‘mean the same thing’ must be able to exist.

(11) Consider a pair x₀ ∈ X^α, x₁ ∈ X^α, with x₀ ∈ Dom(f^α_φi) ⊂ X^α_i and similarly with x₁ ∈ Dom(f^α_φj) ⊂ X^α_j. Let x₀ π x₁ imply that x₀ ‘means the same thing as’ x₁ if and only if, for each Ξ^α_i ∈ A and Ξ^α_j ∈ A and for the same z₀ ∈ Z, it is true that (I) when (y ∈ Y^α_i) = (y ∈ Y^α_j) = ŷ, and (II) when φᵢ = φⱼ = φ₀, a value of φ, then f^α_φi(x₀, ŷ) ∈ z₀/E ∩ Zᵢ and f^α_φj(x₁, ŷ) ∈ z₀/E ∩ Zⱼ.

(12) We apply the plausible constraint that J^α maps z₀/E ∩ Zᵢ onto a single member y_a of Y^α_i (which may be the indifferent element), and similarly that J^β maps z₀/E ∩ Z_k onto a single member y_b of Y^β_k.

(13) In this case x₀/π is a sign with the common meaning of the set z₀/E. Any x_a ∈ x₀/π, x_b ∈ x₀/π may be combined with y_a (if x_a ∈ X^α also) to yield a legitimate utterance x_a, y_a = u_a ∈ U^α and, if x_b ∈ X^β, with y_b to yield a legitimate utterance x_b, y_b = u_b ∈ U^β. By ‘combined’ we intend that x_a is ‘presented’ (given ŷ selected by Ξ^α_i and a suitable value of φᵢ) to Ξ^α_i (or, in the case of x_b, the same comment with reference to Ξ^β_k and φ_k rather than Ξ^α_i and φᵢ).

(14) Sets of signs are informative if there exist equivalence classes like z₀/E, z₁/E, . . . , and values of φᵢ like φ₀, φ₁, . . . , such that if φᵢ = φ₀ then f_φi(x₀, ŷ) ∈ z₀/E ∩ Zᵢ, implying y₀ ∈ Y^α_i, and if φᵢ = φ₁ then instead f_φi(x₀, ŷ) ∈ z₁/E ∩ Zᵢ, implying y₁ ∈ Y^α_i; since, in this case, the selection of y ∈ Y^α_i indicates the adaptation of Ξ^α_i.

(15) We are now in a position to specify the composition of the sets of legitimate utterances in the operational languages α and β, namely the sets of product pairs U^α and U^β, which are defined as being disjoint, U^α ∩ U^β = ∅.

Let X/π be the set of equivalence classes x_r/π. Let Z/E be the set of equivalence classes z_r/E.

U^α = ∪_{x_r/π in X/π} ∪_{z_r/E in Z/E} {x_r/π ∩ [∪_{Ξ^α_i in A} Dom(F^α_i)]} ⊗ J^α(z_r/E)

Similarly for U^β, if Ξ^α_i ∈ A is replaced by Ξ^β_k ∈ B and if Dom(F^α_i) is replaced by Dom(F^β_k).

(16) Coherent discourse in α or β (incidentally, also informative discourse in α or β) consists of sequences of legitimate utterances. (This is not, of course, the only discourse in α or β; for some values of φᵢ, Ξ^α_i will, given x₀, produce y such that x₀, y is not in U^α, and similarly for φ_k, Ξ^β_k, and signs not in U^β.) But insofar as α and β are operational languages a coherent universe of discourse must exist, and communication within it is characterised by sequences [u_α] ⊂ U^α and [u_β] ⊂ U^β.

(17) The factors of the x in a coherent sequence of discourse [u_α] = x₁ → x₂ → . . . must be y_r ∈ J^α(Z_r/E). For since the communication is coherent, any selection of ŷ from Y^α_i (or, for Ξ^β_k, from Y^β_k) must correspond to some y_r ∈ J^α(Z_r/E) [or, in the case of one of the B subsystems, to y_r ∈ J^β(Z_r/E)]. But X^α_i and X^β_k are product sets with factors Y^α_j and Y^β_j, and the only values of these factors which are achieved in coherent discourse are signs, or components of signs, ŷ = y_r, as above.

(18) Hence y = J^α(Z_r/E ∩ Zᵢ) ∈ Y^α_i, for some r; y = J^β(Z_r/E ∩ Z_k) ∈ Y^β_k, for some r. Further, if, in the directed graph, subsystems Ξ^α_a and Ξ^α_b give an input to subsystem Ξ^α_i, then for x ⊂ X^α_i, x = J^α(z_a, . . . , z_b), where the state z_a ∈ Z_a/E ∩ Z_a and the state z_b ∈ Z_b/E ∩ Z_b. Thus, using similar arguments for B, we arrive at the structure indicated in Fig. 9 below.

The physical subsystems Ξᵢ, with behaviours that generate the discourse, are adaptive in the sense of (3) and (4): the values of φᵢ converge to a definite value φ₀. Regional evolution entails the stable ‘cooperative’ coupling of these subsystems (in the sense that the measure of Dom f_φi ∩ Dom f_φj increases as a result of adaptation) and it is denoted as Ξᵢ, Ξⱼ → Ξᵢ ∪ Ξⱼ.

Notice that the utterances of Ξ^α_i are no longer distinct, in α, from the utterances of Ξ^α_j and become ‘joint utterances’. In order to effect separation it would be necessary to discover and attach the name ‘i’ or ‘j’ as a parameter of each utterance.

Condition (15) and condition (7) imply that, in order to preserve the relation Ξ^α_i = J^α(Ξᵢ) between Ξᵢ and Ξ^α_i as well as U^α ∩ U^β = ∅, it is necessary to choose p and E so that no cyclic subgraph of the system that includes Ξᵢ such that J^α(Ξᵢ) = Ξ^α_i ∈ A also includes Ξ_k such that J^β(Ξ_k) = Ξ^β_k ∈ B. Hence Ξᵢ ∪ Ξ_k is a prohibited coupling.

Fig. 9. Physical embodiment at any time inducing a particular sample of discourse.


(19) There are thus several ways of discussing the system. The most comprehensive refers to the physical embodiment. Here the relations p and E are preserved and the abstractions J^α and J^β have a definite form; in the case considered, ‘frequency’ and ‘spatial pattern’ attributes of physical events. Next there are the communication systems derived by J^α and J^β, which are related through the equivalence classes induced by E. Finally, there is the discussion of discourse characterised by a relation π induced by E; and, speaking in these terms, it may be impossible to preserve the identification between the communication systems Ξ^α_i, Ξ^β_k, and the originally chosen physical subsystems Ξᵢ, Ξ_k.

(20) To preserve the integrity of α and β we need a condition like U^α ∩ U^β = ∅. On the other hand, the possibility of translation depends upon an αβ and βα interaction which is formally represented by a many-to-one mapping Ω_i (with parameter i) such that if X^β_i ⊂ X^β then possibly Ω_i(X^β_i) ⊂ Dom(F^α_i), and vice versa for α to β. But in order to preserve the αβ distinction, if Ω_i(X^β_i) ⊂ Dom(F^α_i) then it is not true that Ω_i(X^β_i) π x for any x ∈ X^α. Hence the subsystems we have considered cannot translate coherent discourse.

(21) Translation, in fact, entails the construction of a different logical type of statement; hence, from the viewpoint of an observer, an uncertainty regarding type (some care is needed in the interpretation of this remark): for manifestly different types of statement are involved in a discussion of the structure in (19). But translation entails the development of higher orders of function. Hence an observer’s propositions about the translation are propositions of different and higher type. In particular, his linguistic interpretation of the system leads him to discard his definition of subsystems as localised entities.

(22) A formal representation of what occurs when a subsystem (at least of the kind we have considered) translates a coherent sequence of discourse leads us to reinterpret our idea of a subsystem or, equivalently, of the function it computes (the trick involved is that the basic distinction between the subsystems Ξᵢ rests upon consideration of the entire structure in Fig. 9, and the mapping is chosen to satisfy this distinction). So far we have been concerned with utterances U^α and U^β determined by subsystems Ξ^α_i, Ξ^α_j and Ξ^β_k, Ξ^β_l that are characterised by Order (I) functions F^α_i, F^α_j and F^β_k, F^β_l. Now consider the construction of Order (II) (or of higher order) functions determining ‘utterances’ in U^α ⊗ U^α, U^β ⊗ U^β, or, since this is not prohibited, U^α ⊗ U^β.

Denote: (I) x_a in u_a ∈ U^α, x_b in u_b ∈ U^β; (II) ξ_a = (x_a, x_a) in (u_a, u_a) ∈ U^α ⊗ U^α, ξ_b = (x_b, x_b) in (u_b, u_b) ∈ U^β ⊗ U^β, ξ_c = (x_a, x_b) in (u_a, u_b) ∈ U^α ⊗ U^β.

If π can be interpreted at Order (II) (the assumption that it can is the ‘similarity’ condition of 3.4.) then none of these relations is prohibited: (1) x_a π ξ_a, x_b π ξ_b (which are not translations), or (2) ξ_a π ξ_c, ξ_b π ξ_c, hence ξ_a π ξ_b, when, from (1), x_a π x_b, which implies the possibility of translation. Now to say ‘x_a π x_b’ does contradict the original constraints imposed upon the system if we interpret this as a proposition of Type I (referring to the Order (I) functions), but it does not contradict these constraints if it is interpreted as a Type II proposition (in other words, if ‘mean the same thing as’ is now conceived as a relation involving Order (II) functions and the subsystems, whatever these may be, that they characterise). Alternatively, we can define a new relation π* such that ‘x_a π* x_b’ is a Type (I) proposition that is true if and only if the Type (II) proposition ‘x_a π x_b’ is true.

(23) From (18), regional evolution implies Ξᵢ, Ξⱼ → Ξᵢ ∪ Ξⱼ. It can be argued that Ξᵢ ∪ Ξⱼ has a behaviour (and that the corresponding, coupled, communication system generates a discourse) that is determined by a function of higher order than the Order (I) functions f_φi and f_φj that are members of its domain (equivalently, the coupling introduces further axioms into the system, since the graph is redefined). Linguistic evolution, entailing the ‘translation’ or ‘identification’ of signs, should involve couplings like Ξᵢ ∪ Ξ_k, but these are prohibited by (18). However, the coupling [(Ξᵢ ∪ Ξⱼ) ∪ (Ξ_k ∪ Ξ_l)], which is interpreted as a stable, cooperative association between the coupled systems Ξᵢ ∪ Ξⱼ and Ξ_k ∪ Ξ_l, is not prohibited. Notice, however, that as in (18) the utterances of Ξ^α_i are inseparable from those of Ξ^α_j, or those of Ξ^β_k from those of Ξ^β_l. Hence, if a mapping, Ω, of utterances is described in the original descriptive framework, in A, it must carry the names ‘i’, ‘j’ as a parameter.

(24) Consider, in particular, the special case when a coherent sequence of α discourse μ_α describes a subsystem (since subsystems are identified with functions, it is sufficient that μ_α exhibits the complete behaviour of a subsystem). Then a translation of μ_α → μ_β is a reproduction of the subsystem concerned (notice (1) that this is the case we chiefly examined in 5.1.-5.6. and 6.1.-6.5., and (2) the close relation between this case and Rosen’s paradox (Rosen, 1959)). In order that an α subsystem reproduces, it is necessary to invoke a distinct subsystem, here the β image of the α subsystem, and, in order to speak about the process, to invoke an idea of logical type. In Rosen (1959) the logical type of statements about the system and its distinct environment is higher than the original type of statements about the impossible self-reproducing system. In the present case the physical correspondent of an α-system is reproduced in the ‘environment’ provided by the physical correspondent of a β-system, and vice versa. In terms of α- or β-discourse this entails ‘translation’.

(25) Several physical embodiments may be possible; we have considered only a particular evolutionary system, the subsystems of which are characterised by a combination of the Ξᵢ, represented, formally, as ‘Ξᵢ ∪ Ξⱼ’.

(26) Now any communicating group of the Ξᵢ is trivially such a combination. However, we adopt the reasonable convention of writing ‘Ξᵢ ∪ Ξⱼ’ if and only if the correlation between the states of Ξᵢ and certain other states of Ξⱼ is 1, that is, if the interaction between the designated subsystems is not reasonably construed as communication; for no doubt is resolved as a result of it.

(27) To achieve Ξᵢ ∪ Ξⱼ the adaptation rules are modified as indicated in 3.4. and later illustrated in 5.1.-5.6. and 6.1.-6.5.

(I) Adaptation is locally irreversible (that is, with respect to φ and the distinction between Zᵢ and Zⱼ).

(II) θᵢ = θ(Ξᵢ) is an index of survival; we assert that the physical medium that embodies Ξᵢ must be maintained at a cost that is measured in terms of θ(Ξᵢ). Hence the θ(Ξᵢ)-maximising criterion is a criterion of survival, say θ(Ξᵢ) > θ₀ for survival.

(III) The Gᵢ are so defined that θ is superadditive, thus: θ(Ξᵢ ⊗ Ξⱼ) > θ(Ξᵢ) + θ(Ξⱼ).

(IV) To represent survival, the directed graph of Ξᵢ is defined if and only if the physical embodiment of Ξᵢ survives, namely, if and only if θ(Ξᵢ) ≥ θ₀.

(28) Finally, a replacement rule is needed to avoid the depletion and, ultimately, the disappearance of subsystems. In some pertinent sense the domain and the range of potentially definable sets of functions are created to maintain an average population of n subsystems which differentiate and, since θ is superadditive, also aggregate by adaptation.

SUMMARY

We have considered a cybernetic model in which learning and cognition appear as components of an evolutionary process in a self organizing system. The model illuminates the role of ‘distributed’ or ‘non-localized’ functions and it possesses an interesting asymmetry which seems to account for some of the peculiar discontinuities in behaviour associated with ‘attention’ and ‘insight’.

In order to discuss the activity in a self organizing system it is necessary to adopt a descriptive framework applicable in the case when we (acting as observers or experimenters) suffer a structural uncertainty, which we do (though this is often forgotten) in any discussion of intellect. To illustrate the point, we cannot define the representation of a message, like an engineer, with complete certainty. But this does not mean that the model is unconstructable. Indeed, a functioning physical realization of the model exists. Until a few months ago the most likely correlate in the brain for the evolutionary process in this cybernetic model seemed to be a changing mode of activity in resonant neurone circuits. Whilst this interpretation is still plausible, it leads us to look for further physical mechanisms with which the resonant circuits interact. Some recent work from other countries indicates a number of physiological mechanisms, acting at a macromolecular level, that may embody the required interactions.

REFERENCES

ANGYAN, A. J., (1958); An analogue model to demonstrate some aspects of neural adaptation. Mechanisation of Thought Processes. London, Her Majesty's Stationery Office (p. 933).
ASHBY, W. ROSS, (1956); Design for an intelligence amplifier. Automata Studies. C. E. SHANNON AND J. MCCARTHY, Editors. Princeton, N.J., Princeton University Press.
ASHBY, W. ROSS, (1958); The mechanism of habituation. Mechanisation of Thought Processes. London, Her Majesty's Stationery Office.
BARBIZET, J., ET ALBARDE, P., (1961); Mémoires humaines et mémoires artificielles. Concours Méd., 6.
BEER, S., (1961); Cybernetics and Management. London, English Universities Press.
BEER, S., (1962); Towards a cybernetic factory. Principles of Self Organizing Systems. H. M. VON FOERSTER AND G. W. ZOPF, Editors. Proceedings Urbana Symposium. Pergamon Press.
BEURLE, R. L., (1954); Activity in a block of cells capable of regenerating impulses. R.R.E. Memoranda, 1042.
BOK, S. T., (1961); Recognising, learning and interpretation in the vacuole memory theory. Proceedings 3rd International Congress of Cybernetics, Namur. Paris, Gauthier-Villars.
BONNER, J. T., (1958); The Evolution of Development. Cambridge, University Press.
BRAITHWAITE, R. B., (1962); Is there a Natural Ordering amongst Theoretical Concepts? Presidential address to the British Society for the Philosophy of Science, to be published.
BROADBENT, D. E., (1958); Perception and Communication. London, Pergamon Press.
BROWN, J., (1958); Information, redundancy and decay of the memory trace. Mechanisation of Thought Processes. London, Her Majesty's Stationery Office.
BUSH, R. R., AND MOSTELLER, F., (1955); Stochastic Models for Learning. New York, Wiley.
CAIANIELLO, E., (1961); Outline of a theory of thought processes and thinking machines. Journal of Theoretical Biology, 2, 204-235.
CAMERON, S. H., (1960); An estimate of the complexity requisite in a universal decision network. W.A.D.D. Bionics Symposium. Washington, A.S.T.I.A.
CARNAP, R., (1947); Meaning and Necessity. A Study in Semantics and Modal Logic. Chicago.
ESTES, W. K., (1950); Towards a statistical theory of learning. Psychological Review, 57, 94-107.
EYSENCK, H. J., (1956); Reminiscence, drive and personality theory. Journal of Abnormal and Social Psychology, 53, 328-333.
GEORGE, F., (1961); The Brain as a Computer. London, Pergamon Press.
GREENE, P. H., (1961); In search of the fundamental units of perception. Technical Report AF 49 (638)-414. Chicago, United States Air Force.
GUNTHER, G., (1962); Cybernetic ontology. Proceedings ONR, Armour Research Conference on Self Organising Systems. YOVITS, JACOBI AND GOLDSTEIN, Editors. Chicago, Spartan Books.
HARMON, L. D., (1959); The artificial neurone. Science, 129, 962.
HARMON, L. D., (1961); Studies with artificial neurones. I. Properties and functions of an artificial neurone. Kybernetik, 1, 89-101.
HEBB, D. O., (1949); The Organisation of Behaviour. New York, Wiley.
HULL, C. L., (1952); A Behaviour System. New Haven, Conn., Yale University Press.
HYDEN, H., (1960); Paper in: The Cell, Vol. 4. J. BRACHET AND A. E. MIRSKY, Editors. New York, Academic Press.
IVAKHNENKO, A., (1961, 1962); Avtomatika, and personal communications.
LETTVIN, J. Y., MATURANA, H. R., AND MCCULLOCH, W. S., (1959); What the frog's eye tells the frog's brain. Proceedings Institute of Radio Engineers, 47, 1940-1951.
MACKAY, D. M., (1951); Mindlike behaviour in artifacts. British Journal for the Philosophy of Science, 2, 105-121.
MACKAY, D. M., (1956); The epistemological problem for automata. Automata Studies. C. E. SHANNON AND J. MCCARTHY, Editors. Princeton, N.J., Princeton University Press (pp. 235-251).
MACKAY, D. M., (1958); Operational aspects of intelligence. Mechanisation of Thought Processes. London, Her Majesty's Stationery Office.
MASTERMAN, M., (1958); In various reports, Cambridge Language Research Unit, Cambridge.
MCCULLOCH, W. S., (1960); The reliability of biological systems. Self Organising Systems. M. C. YOVITS AND S. CAMERON, Editors. London, Pergamon Press.
MCCULLOCH, W. S., AND MACKAY, D. M., (1952); Limiting information capacity of a neuronal link. Bulletin Mathematical Biophysics, 14.
MCCULLOCH, W. S., AND PITTS, W., (1947); How we know universals: the perception of auditory and visual forms. Bulletin Mathematical Biophysics, 9.
MINSKY, M. L., AND SELFRIDGE, O. G., (1960); Learning in random nets. Information Theory. C. CHERRY, Editor. London, Butterworth (pp. 335-347).
NAGEL, E., (1949); The meaning of reduction in the natural sciences. Science and Civilisation. R. C. STAUFFER, Editor. University of Wisconsin Press.
NAPALKOV, A., (1961); Systematics in the working of the brain and some problems in cybernetics. Proceedings 3rd International Congress of Cybernetics, Namur. Paris, Gauthier-Villars.
NAPALKOV, A., BRAINES, S. N., AND SVECHINSKY, V. B., (1959); Problems in Neurocybernetics. Moscow, Academy of Sciences.
PAPERT, D., (1960); Redundancy and linear logical nets. W.A.D.D. Bionics Symposium. Washington, A.S.T.I.A.
PASK, G., (1958a); Physical analogues to the growth of a concept. Mechanisation of Thought Processes. London, Her Majesty's Stationery Office.
PASK, G., (1958b); The growth process inside the cybernetic machine. Proceedings 2nd International Congress of Cybernetics, Namur. Paris, Gauthier-Villars.
PASK, G., (1959); A proposed evolutionary model. Proceedings Urbana Symposium on Principles of Self Organising Systems. London, Pergamon Press.
PASK, G., (1961a); The cybernetics of evolutionary processes and self organising systems. Proceedings 3rd International Congress of Cybernetics, Namur. Paris, Gauthier-Villars.
PASK, G., (1961b); An Approach to Cybernetics. London, Hutchinsons.
PASK, G., (1962); Comments on an indeterminacy that characterises a self organising system. Cybernetics and Neural Processes. V. BRAITENBERG AND E. CAIANIELLO, Editors. New York, Academic Press.
PASK, G., AND LEWIS, B. N., (1962); Annual Report on Burroughs Research Corporation Perception Project. System Research Ltd., to be published.
PAVLOV, I. P., (1927); Conditional Reflexes. London, Oxford University Press.
POLONSKY, I., (1961); La cellule vue comme un système cybernétique naturel à la lumière des récents progrès de l'électronique moléculaire. Proceedings 3rd International Congress of Cybernetics, Namur. Paris, Gauthier-Villars.
PRINGLE, J. W. S., (1951); On the parallel between learning and evolution. Behaviour, 3, 174.
RASHEVSKY, N., (1960); Mathematical Biophysics. New York, Dover.
RASHEVSKY, N., (1962); Mathematical foundations of general biology. Annals New York Academy of Sciences, 96, 4.
ROSEN, R., (1958); Representation of biological systems from the standpoint of the theory of categories. Bulletin of Mathematical Biophysics, 20.
ROSEN, R., (1959); A logical paradox implicit in the notion of a self reproducing automaton. Bulletin of Mathematical Biophysics, 21.
ROSENBLATT, F., (1961); Perceptrons and the theory of brain mechanisms. Cornell Aeronautical Laboratory Report No. VG-1196-G-8.
SHERRINGTON, C. S., (1948); The Integrative Action of the Nervous System. New Haven, Conn., Yale University Press.
SIEGAL, S., (1959); Theoretical models of choice and strategy behaviour; stable state behaviour in the two-choice uncertain outcome situation. Psychometrika, 24, 303-316.
SKINNER, B. F., (1938); The Behaviour of Organisms: An Experimental Analysis. New York, Appleton Century.
UTTLEY, A. M., (1956); Conditional probability machines. Automata Studies. C. E. SHANNON AND J. MCCARTHY, Editors. Princeton, N.J., Princeton University Press.
VON FOERSTER, H. M., (1960); On self organising systems and their environment. Self Organising Systems. M. C. YOVITS AND S. CAMERON, Editors. London, Pergamon Press.
VON FOERSTER, H. M., personal communications.
VON FOERSTER, H. M., AND PASK, G., (1960/61); A predictive model for a self organising system. Cybernetica.
WALTER, W. GREY, (1953); The Living Brain. London, Duckworth.
WIENER, N., (1961); Cybernetics. Cambridge, Mass., Massachusetts Institute of Technology Press; New York, John Wiley.
WIENER, N., (1962); Comments made at the 2nd International Congress of Cybernetic Medicine, Amsterdam, 1962, and at the International Spring School of Physics (Cybernetics and Neural Processes), Naples, 1962.
WILLIS, G. D., (1959); Plastic neurones as memory elements. Lockheed Report, LMSD, 48432.


The Obstacle of Non-Linearity in Neurology

P. NAYRAC

Clinique Neurologique et Psychiatrique de l'Université de Lille, Lille (France)

It often happens that, when one discusses the cybernetic approach to nervous functions, one meets the a priori objection either that the nervous system cannot be reduced to a machine (the objection of a biologist) or that its responses are not linear (the objection of a mathematician).

In fact these two objections are really one, for the first characteristic of a mechanical device is precisely its linearity. And it is thanks to linearity that the comparison with mechanics can be fruitful in the study of the nervous system, because linear devices can be analysed by well-known methods of relatively easy use.

A system is said to be linear when the input (in the present case the stimulus) x is related to the output y (in the present case the response) by a function y = f(x) which is a solution of a differential equation that is linear with respect to y and its derivatives, the functions g₀, g₁, . . . and h being given functions of x:

y g₀(x) + y′ g₁(x) + y″ g₂(x) + . . . = h(x)

Moreover, when one speaks of a linear system one generally implies that the system does not change in time, that its parameters do not change, that is to say that one is dealing with a linear differential equation with constant coefficients:

A₀y + A₁y′ + . . . = B

Thus appears the major property of linear systems: they permit the combination of stimuli and the superposition of responses. Thanks to the constancy of the parameters one can carry out the experiment separately with stimulus a, then with stimulus b, then with (a + b).
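A minimal numerical check of this superposition property, assuming a first-order constant-coefficient equation A₀y + A₁y′ = x as a stand-in for the general case; the coefficients and the stimuli a and b are invented.

```python
# Superposition check for an assumed linear, constant-coefficient system
# A0*y + A1*y' = x(t); the responses to a, to b, and to (a + b) are compared.

A0, A1, dt = 1.0, 0.5, 0.001

def respond(stimulus):
    """Integrate A0*y + A1*y' = x(t) from rest by explicit Euler steps."""
    y = 0.0
    out = []
    for k in range(2000):
        x = stimulus(k * dt)
        y += dt * (x - A0 * y) / A1
        out.append(y)
    return out

a = lambda t: 1.0                       # stimulus a: a step
b = lambda t: 0.5 * (t % 0.2 < 0.1)     # stimulus b: a square wave
ab = lambda t: a(t) + b(t)              # combined stimulus (a + b)

ya, yb, yab = respond(a), respond(b), respond(ab)
max_error = max(abs(yab[k] - (ya[k] + yb[k])) for k in range(2000))
print(max_error)   # essentially zero: the responses superpose
```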

Such a linear system is then a possible field for the application of frequency analysis: convolution integral, frequency spectrum, Laplace transformation, transfer function, power spectrum, autocorrelation.

But the nervous system is certainly not a linear system. Its parameters are functions of the situation, that is to say it will not always give the same response to the same stimulus. Even in a given, constant situation the types of reaction may vary simply through the unfolding of functional states. Weak signals, even when perceived, are often sub-threshold with respect to the integrated activity of the organism as a whole. Very powerful stimuli produce responses which may be of a different order from those evoked by moderately intense stimuli. Finally, it has been maintained that the time scale of nervous phenomena may be quantised. And at the scale of the neurone the all-or-none law seems to eliminate all linearity.

A priori, one is therefore tempted to reject linear models from neurology and to turn by preference to the non-linear procedures of analysis proposed in particular by Wiener, Singleton, Platzer and Senders. But linearity brings so many conveniences that it is worth seeking some way of applying it.

The notion of quasi-linearity, studied by Licklider, is valuable in this respect. A quasi-linear system is one which can be described, to a sufficient approximation, by linear differential equations whose coefficients may vary in time only under certain restrictions: the variation may occur only in steps or, on the contrary, very slowly compared with the period of the stimulus. Granted this, the behaviour of the system comprises two parts: a linear part, that is, one satisfying a linear differential equation with approximately constant coefficients, and an irreducibly non-linear part. For the system to be regarded as quasi-linear, the non-linear part of the behaviour must be small compared with the linear part. The order of magnitude of this smallness is dictated by the fact that the analysis sets the non-linear part aside by treating it as random, that is, by incorporating it into the noise. Everything therefore depends upon how much noise can be tolerated in the problem under study, that is, upon the ratio signal/noise, or again upon the partition of the variance of the response between signal and noise.

These investigations thus make it possible to consider limited domains within which the nervous system proves to be quasi-linear. In this, moreover, neurology adopts an attitude not so different from that of an engineer facing a regulator: no industrial device is perfectly linear, and linear analysis is applied only to signals that are sufficiently intense while remaining well below the overload limit.
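The quasi-linear point of view can be sketched as follows: fit the best linear description of a mildly non-linear response and treat the residue as noise; the 'system', its small quadratic term and the noise level are assumptions of the example.

```python
import random

# Hypothetical sketch: least-squares fit of a linear part to a mildly
# non-linear response; the residual (non-linear plus random) part is
# treated as noise and the variance is split between the two.

random.seed(2)
def system(x):
    return 2.0 * x + 0.15 * x**2 + random.gauss(0, 0.05)   # small non-linearity

xs = [i / 50 for i in range(100)]
ys = [system(x) for x in xs]

# Least-squares slope and intercept for the linear part.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
intercept = my - slope * mx

linear_part = [slope * x + intercept for x in xs]
residual = [y - l for y, l in zip(ys, linear_part)]

var = lambda v: sum((u - sum(v) / len(v))**2 for u in v) / len(v)
print(round(var(linear_part) / var(residual), 1))  # 'signal'/'noise' variance ratio
```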

At the scale of the neurone, the all-or-none law which governs impulses seems an insurmountable obstacle to linear analysis. But the neurone cannot on that account be assimilated to a flip-flop type of system. The impulse is only an output signal: it is prepared by continuous phenomena which may be regarded, at least to a first approximation, as quasi-linear.

For example, on an isolated nerve fibre a sub-threshold excitation produces a local depolarisation of the membrane, a non-critical prepotential. This prepotential appears and disappears slowly, and propagates with decrement to the immediately neighbouring regions of the fibre. Its magnitude is a continuous and increasing function of that of the sub-threshold stimulus.


In the neurone, the phenomena of facilitation and inhibition which regulate the excitability of the cell are continuous and quasi-linear: the discontinuous impulse which follows the application of a supra-threshold stimulus is only the signal emitted by a system to express a state of excitability governed until then by quasi-linear phenomena.

At the resting motor end-plate, the nerve terminal continuously liberates acetylcholine in the form of multimolecular quanta which, under the sole influence of thermal agitation, reach the membrane by a random process. The arrival of impulses accelerates the production of the acetylcholine quanta according to a linear law, thereby multiplying the chances of impact. When the impacts reach a certain rate per unit time the muscle potential appears, and then obeys the all-or-none law. Here, then, is a phenomenon analogous to what happens in the nerve fibre: the non-linear phenomenon is only the outcome of a linear process.

The impulse is subject to the all-or-none law, and thus in itself constitutes a non-linear phenomenon. But impulses can form structures obeying linear laws. For example, in sensory fibres the intensity of the stimulus modulates the frequency of the impulses, and Katz has shown the linearity of the relation between the intensity of the stimulus and the frequency of the receptor's discharge. Matthews had even established a logarithmic relation between the discharge frequency of an isolated fibre from a frog muscle spindle and the tension applied to the muscle; but this result is nowadays disputed.

Another example of the cooperation of impulses in a linear phenomenon: in the spinal cord, the reflex or internuncial systems bombard the motoneurones with impulses which are always sub-threshold. The excitability of the neurone varies linearly until the threshold is reached: the synaptic discharge then proceeds on the all-or-none principle. A linear phenomenon is thus seen to be interposed between two phases made up of impulses, that is to say of non-linear elements.

It is, moreover, not certain that the all-or-none principle is the obligatory foundation of nervous activity. In the gamma loop the regulation is linear, even when only one fusimotor fibre and one fusosensory fibre are considered. There may be here an infraction of the all-or-none law. The matter is still under discussion.

If we leave the scale of the isolated neurone to consider assemblies of neurones, statistics intervenes and can bring out a linear or quasi-linear law by extracting it from the behaviour of a set of elements subject to the all-or-none law.

For example, in the stretch reflex the contraction of each motor unit obeys the all-or-none law, but for the muscle as a whole there is, thanks to recruitment, a linear relation between the force which stretches the muscle and the contraction with which it responds.
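A hedged sketch of linearity arising by recruitment: each unit below is a pure all-or-none element, yet the population response grows in proportion to the stretch; the number of units and the spread of thresholds are invented.

```python
# Hypothetical sketch of linearity by recruitment: each motor unit is
# all-or-none, but with thresholds spread over the population the total
# contraction grows nearly linearly with the stretching force.

N = 1000
thresholds = [i / N for i in range(N)]      # evenly spread unit thresholds
unit_force = 1.0 / N                        # contribution of one recruited unit

def contraction(stretch):
    # Each unit fires (all-or-none) once the stretch exceeds its threshold.
    return sum(unit_force for th in thresholds if stretch > th)

for stretch in (0.1, 0.2, 0.4, 0.8):
    print(stretch, round(contraction(stretch), 2))
# The aggregate response is proportional to the stretch, although every
# individual unit is a pure threshold element.
```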

This is even clearer in the cerebral cortex. For example, in conditioning, the conditional reflex, once established, is all-or-none, but its establishment is prefaced by a linear progression of the specificity of the stimulus. Before conditioning has been obtained, a stimulus slightly different from the one intended to be the conditional stimulus (and which is therefore habitually used in the conditioning sessions) gives a quantitatively smaller response than the future conditional stimulus will, and the smaller the more the two stimuli differ. There is a linear relation between the stimulus (say, for example, the frequency of a siren) and the conditional response it produces; but once conditioning is established the stimulus is rigorously specific and it is all-or-none.

We also know the logarithmic formula which Fechner established to link the intensity of a stimulus to the sensation it engenders. But since Fechner's formula can be disputed in theory, and is justified by its practical applications, it is worth dwelling a little on this point.

FECHNER'S FORMULA

Given a stimulus (an illumination, for example, or a sound), everyone knows, without having learned it, that a variation of this stimulus is perceived only if it is not too small in relation to the stimulus. Consider a room lit by a chandelier carrying six bulbs. One of them goes out. A subject present in the room notices the incident without having to look at the chandelier. If, on the contrary, the chandelier carries a hundred and fifty bulbs, the failure of one of them passes completely unnoticed.

These common-sense facts have been studied experimentally and have received a quantitative expression.

Let I be the intensity of a stimulus (a luminous intensity, for example). The differential threshold is the minimum amount by which I must be varied in order to obtain a new stimulus just discernible from the first. Let U be the differential threshold. Experiment shows that U is, to a first approximation, proportional to I:

U = KI

K being a constant of the order of 1/100 for visual sensation and of 1/5 for auditory sensation.

Fechner presented this law in another form. He formulated the following hypothesis: the difference between the sensations engendered by two just-discernible intensities of the stimulus is the same whatever the intensity. In other words, in varying an intensity from I₁ to (I₁ + KI₁) one obtains the same variation of sensation as in varying the intensity from I₂ to (I₂ + KI₂). In both cases, when the intensity has varied by the quantity KI the sensation has varied by one and the same quantity, which is called a differential step.

In fact, it is better to strip what is called 'Fechner's hypothesis' of its character as a hypothesis. If Fechner's conception is a hypothesis it is devoid of meaning, since it is not 'operational' and cannot lead to experimental verification: sensation, a fact of consciousness, cannot be formulated in terms of matter-energy, cannot be measured, and cannot figure as a magnitude in an equation. What is measurable is the intensity of the stimulus (for example, by the square of the amplitude if the stimulus is vibratory).

It is better to regard 'Fechner's hypothesis' as a definition, by which Fechner defines a magnitude S that he calls 'sensation', but which is not the sensation of our consciousness.

Consider a stimulus of intensity I and give this intensity an increment dI. Let dS be the corresponding increment of the 'sensation' S. To measure this increment dS the differential step can be used as the unit, since by definition the differential step (the amount by which S varies when I varies by KI) is the same whatever I. A simple rule of three shows that if S varies by one step when I varies by KI, then S varies by dI/KI steps when I varies by dI. This is written:

dS/dI = 1/(KI)

which integrates immediately to:

S = (1/K) logₑ (I/I₀)

I₀ being the absolute threshold, the greatest intensity giving rise to no sensation. This equation is called 'Fechner's law'. It is better to call it 'Fechner's formula', for it merely presents in another form the Bouguer-Weber law, once the magnitude S has been defined; and it must be remembered that S does not represent sensation as it is classically defined in terms of consciousness.

To say 'sensation is equal to the logarithm of the excitation' is devoid of operational meaning.

To say 'the magnitude S defined by Fechner is measured by the logarithm of the intensity of the stimulus' is a correct formulation.

To tell the truth, one further objection can be made: the constancy of K for the different intensities is true only to a first approximation. More exactly, K varies with I; it is a function f(I). So that Fechner's formula should be written

S = ∫ from I₀ to I of dI/(I f(I))

which is much less telling. Despite these criticisms of principle, Fechner's formula retains a definite value. There is no doubt, whenever the psychical effects of a physical agent have to be considered, that it is advantageous to consider the logarithm of the intensity of the stimulus rather than the intensity itself. This appears, in particular, from the frequent use made of the notion of the bel.

To assess a sound intensity, for example, one may use the square of the amplitude of the vibration, in accordance with the physical definition. But since the Bouguer-Weber law, in the expression given to it by Fechner's formula, shows the superiority of the logarithmic function, a unit called the bel will be used to characterize a sound intensity whenever sound is considered in its relations with sensation and psychic life. The bel is defined as the logarithm of the ratio of two sound powers (that is, of the squares of two amplitudes):

    log (A²/A0²), or equivalently 2 log (A/A0),

A being the amplitude of the sound and A0 a previously defined reference level, for example the absolute threshold. More often a sub-multiple of the bel is used, the decibel, which is one tenth of a bel.
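A correspondingly small sketch of the decibel computation just defined; the amplitudes are arbitrary illustrative numbers.

import math

def level_in_bels(A, A0):
    # logarithm of the ratio of two sound powers: log10(A^2 / A0^2) = 2 * log10(A / A0)
    return 2 * math.log10(A / A0)

def level_in_decibels(A, A0):
    return 10 * level_in_bels(A, A0)     # the decibel is one tenth of a bel

print(level_in_decibels(A=10.0, A0=1.0))   # amplitude ratio 10, power ratio 100: 20 dB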

It can be seen that a result of theoretical psychology has led to a formulation which is today in everyday use in various branches of industry, in telecommunications in particular.

We may therefore consider that Fechner's formula is justified by its applications, and that consequently the effect of the stimulation of a peripheral receptor upon behaviour is a linear phenomenon.

TRACKING TESTS

There is, finally, a field in which the notion of quasi-linearity seems to open wide possibilities of analysis: that of tracking tests.

A tracking test consists essentially in asking the subject to follow, with an index which he moves along a rail, a target whose position varies, on an axis parallel to the rail, according to various laws.

In such a test the man behaves like a regulator. The regulated system is the assembly of index and target. Its input quantity is the velocity of the displacement of the index; its output quantity is the distance separating the index from the target (its set-point is zero). The man is the regulator; for him, the input quantity is the distance separating the index from the target, and the output quantity is the velocity he imparts to the index.

This is an artificial situation, created in the laboratory. But it is very close to what happens in many human actions, ploughing a furrow, cutting a piece of cloth, not to mention the handling of controls in modern industry. The fact that this tracking action has been approached by quasi-linear models (although the approach is still very imperfect) opens up to analysis a type of human action of great generality.

This approach can be made in two ways. First, the target can be given an instantaneous displacement, the target being stationary before and after this displacement. This is what is called a step. The subject reacts to it by a displacement of the index, which is called the step-response.

This can be represented by plotting time on the abscissa and the displacements on the ordinate of a graph (Fig. 1). In this case the waveform is used.

It is easy to verify that in general there is a lost time during which the index does not move. Then the index rejoins the target, tracing a sort of exponential. The whole is not linear, because it consists of the succession of two phases that differ in their mechanism: first the travel of the impulses through the nervous system until the motor reaction is initiated, then the continuation, the coordination, the regulation of that reaction.
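The two phases just described — a dead time during which the index does not move, followed by a roughly exponential approach to the target — can be sketched as follows; the dead time and the time constant are invented for the illustration, not measured values.

import math

def index_position(t, step_size=1.0, dead_time=0.2, tau=0.5):
    # idealised step-response: no movement during the reaction (dead) time,
    # then an exponential approach to the displaced target
    if t < dead_time:
        return 0.0
    return step_size * (1.0 - math.exp(-(t - dead_time) / tau))

for t in (0.0, 0.1, 0.3, 0.7, 1.5, 3.0):
    print(f"t = {t:.1f} s   index = {index_position(t):.3f}")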


The second part lends itself fairly well to analysis, even on simple inspection of the waveform. But this analysis has been pushed particularly far in the time domain by Philipps. Hick has even analysed the reaction time, showing that it depends on the variety of equiprobable responses, that is to say on the strategies open to the experimental subject for responding to the displacement of the target. Mayne has devised a model of human tracking in the form of an assembly comprising a navigator and an automatic pilot. The automatic pilot operates in closed loop and is approximately linear. But the navigator occasionally sets a new course, which changes the characteristics of the loop: the system as a whole is therefore discontinuous, like the step-response in human tracking.

Fig. 1. The displacements of the index and of the target in a human tracking test.

FREQUENCY ANALYSIS

Another quasi-linear approach can be drawn from frequency analysis. Its principle is the decomposition of the waveform into an infinity of sinusoids. Each is an elementary oscillation, which can be expressed by the well-known imaginary quantity:

    e^(iωt) = cos ωt + i sin ωt

Each elementary oscillation carries an imaginary coefficient. The spectrum is the imaginary function which establishes a relation between these imaginary coefficients and the corresponding frequencies of the infinity of sinusoids into which the waveform is decomposed.

The waveform expresses the response in the time domain. To express it in the frequency domain, the Fourier transformation or the Laplace transformation is used.

The transfer function is the Laplace transform of an impulse-response.

All these modes of analysis require complicated calculations and plots, and are much facilitated by the use of computers, but they can be simplified by making the target move not by a step but by the resultant of several superposed sinusoids. One then knows that the frequency analysis will have to take only a finite number of frequencies into consideration, which makes things much easier.
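The simplification just mentioned — driving the target with a few superposed sinusoids so that the frequency analysis need only consider those frequencies — can be sketched numerically. The 'subject' below is replaced by an invented delay-plus-lag element, so the gains and phases printed are purely illustrative; only the procedure (the ratio of output to input Fourier coefficients at the injected frequencies) is the point.

import numpy as np

fs, T = 100.0, 20.0                          # sampling rate (Hz) and record length (s), illustrative
t = np.arange(0.0, T, 1.0 / fs)
test_freqs = [0.2, 0.5, 1.1]                 # the few frequencies superposed in the target motion
target = sum(np.sin(2 * np.pi * f * t) for f in test_freqs)

# Stand-in for the tracking subject: a pure delay followed by a first-order lag.
tau, delay_samples = 0.4, int(0.25 * fs)
response = np.zeros_like(target)
for i in range(1, len(t)):
    drive = target[i - delay_samples] if i >= delay_samples else 0.0
    response[i] = response[i - 1] + (1.0 / fs) * (drive - response[i - 1]) / tau

# Quasi-linear description: transfer function estimated only at the injected frequencies.
spec_in, spec_out = np.fft.rfft(target), np.fft.rfft(response)
for f in test_freqs:
    k = int(round(f * T))                    # frequency resolution is 1/T, so bin k corresponds to f
    H = spec_out[k] / spec_in[k]
    print(f"{f} Hz: gain {abs(H):.2f}, phase {np.angle(H, deg=True):.0f} deg")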

One example of frequency analysis leads to proposing as a model Elkind's conditional servomechanism. This model operates as an open loop as long as the output is matched to the 'intention' governing the system, but any deviation between the output and the intention brings a negative feedback into play.

This model is not unrelated to the feedback control of parameters, by the retention or elimination of adaptations achieved by trial and error. To make these adaptations as effective as possible, and to fix them once they have been optimized in this way, is one of the most important aspects of human behaviour. The development that Ashby has given to this mode of approach is well known.

Thus, although the nervous system is not linear, it is possible to begin its analysis, as a first approximation, by attacking it at the points where it can be considered linear. This possibility is still far from having been fully exploited; it will be fruitful to pursue this path. But of course a moment will come when one will have to go on with the less easy methods of non-linear analysis.

RÉSUMÉ

Unfortunately, study by assimilation to servomechanisms and self-regulating devices comes up against the obstacle of non-linearity.

Indeed, the mathematical theory of these mechanisms supposes that they are loop elements obeying the principles of proportionality and of superposition of elementary causes and effects, that is to say that the relations between input quantities and output quantities must satisfy differential equations that are not only linear but also have coefficients constant in time (linear loop).

Obviously, such a model cannot be applied to the nervous system without precautions. At the scale of the neuron, there exists an excitation threshold which stands in the way of the superposition of causes and effects. At the scale of the individual, the complex behaviour of the higher vertebrates derives its complexity precisely from the fact that it does not satisfy the requirements of linearity.

But experiments can be carried out in which, within certain intervals, human behaviour is, in the laboratory, sufficiently linear for the notion of quasi-linearity to be drawn from it. This is in particular the case if the parameters vary abruptly, by steps, at long intervals, or on the contrary if these parameters vary slowly enough to be considered constant over a limited interval of time, the small variations being incorporated into the background noise. Moreover, even in industry, devices are not perfectly linear, especially near their limits of use.

Since non-linear analysis as yet offers us only tools that are more difficult to handle, it seems possible at present to use linear analysis, but in well-delimited situations.

SUMMARY

THE PROBLEM OF NON-LINEARITY IN NEUROLOGY

It is difficult to establish mechanical patterns of living behaviour, for the first characteristic of a mechanical apparatus is linearity. The nervous system, especially, is certainly not linear. In this connection, however, the idea of quasi-linearity, as Licklider considers it, is valuable. A quasi-linear system is one which can be described with sufficient approximation by linear differential equations whose coefficients do not vary in time, or vary only within certain restrictions. The non-linear part is considered as aleatory, that is to say, incorporated in the noise.

The 'all or nothing' principle, which governs the excitation of the nerve fibre, is really but a signal born of a quasi-linear mechanism, a non-critical prepotential. It is to be compared with what happens in muscular contraction.

As regards sensation, one knows how Fechner’s formula (the merit of which has been pointed out by the industrial applications of the idea of decibel) results from the integration of a particularly simple linear equation.

It is more difficult to apply the notion of quasi-linearity to tracking tests. The matter is still under discussion, but it has already been fruitfully approached.

Life is clearly defined as non-linear. It will, therefore, be necessary to apply non-linear methods of analysis. Still, as a first approximation, valuable results can be obtained by a linear approach.

BIBLIOGRAPHIE

ASHBY, W. R., (1961); Design for a Brain. London, Chapman and Hall.
HICK, W. E., (1951); Man as an element in a control system. Research, 4, 112-118.
LICKLIDER, J. C. R., (1960); Quasi-linear operator models. Developments in Mathematical Psychology. R. D. Luce, Editor. Chicago, Free Press of Glencoe.
MAYNE, R., (1951); Some engineering aspects of the mechanism of body control. Electrical Engineering, 70, 207-212.
NAYRAC, P., (1952); Eléments de Psychologie. Paris, Flammarion.
PHILIPPS, R. S., (1947); Theory of Servomechanisms. New York, McGraw-Hill.
WIENER, N., (1949); Informal Lectures on Nonlinear Network Analysis. New York, Wiley.


Adaptive Machines in Psychiatry

J. H. CLARK

Burden Neurological Institute, Bristol (Great Britain)

TEACHING MACHINES AS CONTROL MECHANISMS

A. Control systems

Survival is the purpose of adaptive behaviour, which keeps essential variables, such as blood pressure and body temperature in man, within limits (Ashby, 1960). Ashby's symbol of a dial with limits and a pointer is used (Fig. 1).

Calibration (control without feedback) only works on an unchanging environment (Fig. 2). Feedback (Fig. 3) from the environment is like a series of questions (Q), which the controller answers (A), according to rules (Q, A) of behaviour. Correct answers (CA) keep the essential variables within their limits.

Fig. 1. Essential variable. Fig. 2. Calibration.

Negative feedback (Ashby, 1956) is used in automatic controllers (Pask, 1961) such as automatic pilots, thermostats, James Watt’s governor (Ashby, 1960) and Edmund Lee’s fantail on windmills (Wailes, 1954). Automatic controllers need environments which vary predictably.

Ultrastable controllers (Ashby, 1960) find a correct answer (CA) to an unpredicted question (Q) by trying new rules, unrelated to each other, at random, until a correct rule (Q, CA) is found (Fig. 4). This is then retained, as long as it remains correct. The rule generator (corresponding roughly to Ashby’s set of step-functions) acts when the essential variables go outside one of their limits.
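A minimal sketch of this ultrastable scheme, with an invented one-number 'environment' and rule set (nothing here corresponds to Ashby's actual examples): whenever the essential variable leaves its limits, a new rule is drawn at random, and a rule is retained only for as long as it keeps the variable inside the limits.

import random

random.seed(1)
LIMITS = (-1, 1)                    # limits of the essential variable
demand = 3                          # what the toy environment currently requires

def essential_variable(answer):
    return answer - demand          # within limits only if the answer is close to the demand

rule = random.randint(0, 9)         # current rule: "always give this answer"
for step in range(40):
    if step == 20:
        demand = 7                  # the environment poses an unpredicted question
    if not (LIMITS[0] <= essential_variable(rule) <= LIMITS[1]):
        rule = random.randint(0, 9) # rule generator: try an unrelated rule at random
    # a correct rule (one that keeps the variable within limits) is simply retained
print("rule finally retained:", rule)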


Ashby’s own description (1960) should be consulted since I have ventured to deviate from it.

Fig. 3. Feedback.

Fig. 4. Ultrastable controller.

B. Learning systems

Recurrent situations (Ashby, 1960) are environments which repeat the same questions (Q). Accumulation of adaptation is the economical storing, by a controller, of correct rules (Q, CA), which as the environment changes may continue to be correct (Fig. 5).

Fig. 5. Accumulation of adaptation.

Learning is the process of finding, storing and using again correct rules (Q, CA). Psychology and neurophysiology study learning as a central problem.


Rule store is a symbol used here to indicate the possibility of learning. Successful rules, that is, correct rules (Q, CA) are stored here. Then for each question (Q) the rule store can be searched for an appropriate correct rule. The ensuing answer (A) may still be correct (CA), depending on any changes in the environment.

A collection of correct rules (Q, CA) in a rule store, gained from an environment, forms a ‘model’ of that environment.

The pupil is defined as an ultrastable controller with a rule store, learning the environment in order to control it, and with an error-score (S) as the essential variable of a learning system.

A serial structure is shown by many environments: their separate parts linked like a chain (Fig. 6).

Fig. 6. Serial adaptation.

Fig. 7. Teaching.


A serial adaptation (Ashby, 1960) is a method of dealing with such an environment. Its parts are controlled in their serial order. Thus in Fig. 6: part 1, then 2, and 3, etc., is the order for serial adaptation.

C. Teaching systems

The teacher is defined as that part of a pupil's environment which helps him to learn (Fig. 7). Co-operation is shown by the pupil and the teacher sharing the essential variable, the error-score (S).

The task is defined as that part of the pupil’s environment ‘to be learnt’. The error-score (S) refers to this process of learning the task. Most tasks are serial tasks, having a serial structure, as symbolised in Fig. 7.

The teacher acts then as an instructor and a comparator.

D. Teaching-messages

From Fig. 7 we can list the messages which pass between the pupil, the teacher and the task:

(1) The instructor sends correct rules (Q, CA) to the pupil;
(2) The task puts the questions (Q) to the pupil;
(3) The pupil then gives the answers (A) to the comparator;
(4) The task sends correct answers (CA) to the comparator;
(5) The comparator sends the error-score (S) to the pupil;
(6) The comparator sends the error-score (S) to the instructor;
(7) The comparator sends the nature (N) of the error to the instructor, and
(8) The instructor modifies (M) the serial structure of the task.
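The eight messages can be put together into one schematic loop, shown below; the little arithmetic task, the rule store and the scoring are invented placeholders, not Clark's apparatus. Run repeatedly, the error-score falls to zero once the correct rules have been stored.

# Schematic teaching loop built from messages (1)-(8) above; everything concrete here is invented.
task = [("2 + 2", "4"), ("3 + 5", "8"), ("7 + 6", "13")]    # serial task: (Q, CA) pairs
rule_store = {}                                             # the pupil's rule store

def run_through_task():
    error_score = 0                                         # the essential variable S
    for question, correct_answer in task:                   # (2) the task puts Q to the pupil
        answer = rule_store.get(question, "?")              # (3) the pupil answers to the comparator
        if answer != correct_answer:                        # (4) the task sends CA to the comparator
            error_score += 1                                # (5), (6) the comparator reports S to pupil and instructor
            rule_store[question] = correct_answer           # (1) the instructor sends the correct rule (Q, CA)
            # (7), (8): the nature (N) of the error could be used here to modify (M) the serial structure
    return error_score

print([run_through_task() for attempt in range(3)])         # e.g. [3, 0, 0]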

E. Teaching and learning

In this phase reciprocity occurs: both the teacher and the pupil learn. A symmetry is provided by what is learnt. Thus the pupil learns the ‘task‘ but also how the teacher teaches it and the teacher learns the ‘task’ (but he does so before he teaches it to the pupil), and also how the pupil learns it.

We have to consider that distinctions between the task, the pupil and the teacher may not be obvious. An observer may be forced to make arbitrary definitions (Pask, 1961).

F. Teaching machines

Teaching machines consist of a piece of apparatus (Lumsdaine and Glaser, 1960) and their function is to handle one or more of the eight types of messages numbered above.

The simpler teaching machines contain teaching tools:

(a) Bayeux tapestry. The serial task is to handle message 1: to give correct rules (Q, CA), that is, data, assertions, 'information' to the pupil.


(b) Gramophone record of heart-sounds, normal and abnormal. Serial task as in (a).

(c) Modern text-books, for example that of Kemeny, Snell and Thompson (1957), may be used. Serial task is message 1: correct rules (Q, CA), that is, data, and hints or 'partial information' (Pask, 1961); messages 2, 4 (pupil acts as his own comparator).

Teaching machines proper provide message 8. They include:

(a) Skinner's typical machines (Skinner, 1960). The serial task is to handle the messages 1-6 and 8.

(b) Crowder’s scrambled text-books such as: The Arithmetic of Computers (1962). The serial task is to handle the messages 1-8. Not all these messages are obvious, but they are implicit in the way the scrambled text-book is put together. Messages 7 and 8 are embodied in alternate routes through the task.

(c) Pask's automatic trainers for neuro-muscular skills, such as SAKI, which teaches punched card operating (1961). The serial task is to handle the messages 1-8.

Boxes, which house most teaching machines, are obviously seen as 'machines'. However, most teaching functions can be performed by Crowder's scrambled text-books.

Time, however, is important in neuro-muscular skill training, in questions (Q), answers (A) and correct answers (CA) alike. The error-score (S) may be in the form of multiple, cumulative response-time scores (Pask, 1961). Such machines need boxes for their buttons, clocks, and multiple storage devices.

G. Teaching machine programmes

Assuming the box, book or other apparatus, a programme comprises: a task, a serial structure of the task, teaching rules of the comparator, and teaching rules of the instructor.

Programming is the process of finding a good programme, that is, one that teaches the task well. All programmes are found by trial and error, with large groups of pupils. Their errors are scored and the nature of their errors examined. Then the programme is modified and tried again; and so on.

Programming is thus an ultrastable process, with a low error-score (S) from nearly all pupils as the essential variable. Improvements are retained, so it is a learning process also.

Skinner’s machines and Crowder’s scrambled text-books embody these final, ac- ceptable programmes, based on the average performance of many past pupils.

Pask’s automatic trainers, such as SAKI have programmes which are not so fixed and which are sensitive to the individual pupil. They embody multiple teaching rules which modify the task and give partial information according to multiple stored responses (A) of the pupil.

Skinner's programmes always aim at reducing the pupil's error. Pask's programmes in the short-run sometimes aim to increase it, as when the task is made more challenging to maintain the pupil's interest. Thus there is alternate competition and co-operation which makes the interaction very 'lively'.


TEACHING SITUATIONS IN PSYCHIATRY

A. Doctors and patients

Psychiatrists (and other doctors) control the patient to bring his symptoms, which are the essential variables, back within limits (Fig. 8).

Fig. 8. Psychiatry.

Doctor and patient, however, alternate in the roles of pupil and teacher, and Fig. 7 should be consulted again.

Treatment is the presentation, by the doctor, of a task. The patient, now the pupil, adapts himself to an environment which alters with respect to drugs, hospitals, nurses, operations and so on. Successful treatment produces adaptive changes in the patient which reduce his disorder and the symptoms of it. Diagnosis is the classification by the doctor of his own, internal ‘model’ of the patient and his disorder. Successful treatment is based on correct classification.

When he takes the patient’s history and examines the patient, the doctor is now the pupil, learning about the patient (making a ‘model’ of the patient and his disorder).

Special investigations extend the examination by instruments as in radiography, or by standardised techniques as in tests of intelligence.

B. Criterion

Doctors and patients send messages to each other by words and facial expressions, by surgical operations, by drugs, physiotherapy and so on. Human beings have requisite variety (Ashby, 1956) to deal with such varied and subtle messages.

Teaching machines, if they are to replace human doctors in any situation, must be able to deal with the variety of messages (type 3) which the patient sends.

This criterion limits the choice of situations in psychiatry which are amenable to present-day teaching machines.

The multiple choice of possible answers (A) is an example of an automatically limited variety of messages frequently used in such machines (Lumsdaine and Glaser, 1960).

TEACHING MACHINES IN PSYCHIATRY

A. Education

Mentally backward patients should benefit from education by teaching machines, which could easily embody most of their special needs when learning, such as:


(a) Serial task,
(b) Score presented,
(c) Repeated help with answers,
(d) Over-learning of correct answers, and
(e) Correctness rather than speed at first.

See the account of the specialised teaching needs of backward pupils by Clarke and Clarke (1958).

Verbal and non-verbal tasks could both be programmed. See the account of a non-verbal task for pre-school children by Hively (1960).

Teaching psychiatry to students might be done by Skinner's machines. During the programming the acid-test of the pupil's comprehension, indicated by error-scores, might lead to an account of, in particular, the neuroses which would appear nihilistic. However, the effect of such a pruning of the theory of the neuroses might be to increase the present very poor agreement between psychiatrists as to their diagnosis (see Kreitman, Sainsbury et al., 1961).

B. Therapy

(1) Hypnotic induction

In a previous paper (Clark, 1961) hypnotic induction was analysed with a view to its mechanisation, and it was found that the questions (Q) to the subject are physical actions and words:

(a) Physical actions used by hypnotists are eye fixation, touching, contact passes, non-contact passes, and limb lifting.

(b) Topics raised by hypnotists are eyes, body, sleep, vision, thought, breathing, and hearing.

(c) A grammar was used; the sentences of hypnotists follow the sequence: questions, public assertions, commands, and private assertions.

(d) Answers (A) from the subject were non-verbal. Electromyography and breathing rhythm were discussed as suitable feedback channels which a teaching machine could handle, using appropriate comparators.

(e) The constraint of sensory receptors, and the information-content and strength of stimuli (Q) were also discussed.

My conclusions were that a teaching machine combining features of those of Skinner and Pask could deal with the questions (Q) and answers (A) needed to induce hypnosis.

Physical actions could be imitated by simple devices. Words, topics, grammar could be put onto tapes arranged as a task with a flexible serial order. Hypnotic induction was seen as a teaching situation with the subject learning the task of entering the hypnotic state.

(2) Behaviour therapy

Eysenck (1960) has gathered accounts of this method of treatment by Wolpe and many other authors. The main idea is to replace unadaptive behaviour, as seen typically in phobias, by more useful patterns.

Wolpe’s principal method is to present a serial task or hierarchy of noxious stimuli to the patient. Thus a patient with a phobia of cats might be shown the word CAT on a card, then a picture of a cat, and so on. The early stimuli are only distantly related to real cats, and so arouse little anxiety. De-sensitisation occurs, and the patient gradually learns to accept these questions (Q) without giving answers (A) indicating (Fig. 7) too much anxiety (S). Pleasant emotions may be induced simultaneously in order to reduce anxiety by reciprocal inhibition.

Aversion therapy, used typically for alcoholism and fetishism, is the exact opposite. The object is now to increase the anxiety attached to alcohol or a fetish-object. This is done by giving unpleasant stimuli when they are presented.

Feedback of answers (A) is usually verbal or visual. The doctor watches the patient. Skin resistance changes have been used in the programming stage, in the preparation of the hierarchy with the patient's help.

Voluntary control of the noxious stimulus is sometimes allowed to the patient.

A snag which sometimes occurs is that the patient moves through the hierarchy, the serial task, too fast and generates excessive anxiety. This sets the treatment back.

Fig. 9. Normal subject playing the Horses-Game.


My suggestion here is that these methods might be refined by using more physiological feedback, more voluntary control by the patient of the serial order, intensity and duration of stimuli, and that the programme, at present Skinnerian in nature, could be enriched by some Paskian flexibility.

(3) Occupational therapy

Barton (1959) has stressed the importance of this. It is needed by psychiatric patients, including mentally retarded ones. Occupational therapy includes many activities, from learning a trade down to simple games.

In the autumn of 1958 I asked Mr. Gordon Pask if he would design a game-like teaching machine. He agreed and, helped by Mr. Brian D. Corbett, an electronic engineer, I built the Horses-Game. It was demonstrated on 13th March 1959 and used in a pilot experiment during April and May of that year (see below).

Control-box, display and a normal subject playing the Horses-Game are shown in Fig. 9. Fig. 10 shows the control-box and display diagrammatically, while Fig. 11 shows the main parts of the machine.

Fig. 10. Display and control-box of Horses-Game. For explanation see text.

The game consists of adding extra 'hay' to a field by pressing a button on the control-box. This encourages two 'horses', shown as black and white circles (Fig. 10), to move to the 'spot' on the field with this extra food. The crossed circle at each spot is a lamp whose varying intensity shows the amount of food: the 'grass' plus any hay deposited at that spot. The horses move along the arrowed paths, their movements being shown by their appropriate light coming on from spot to spot along the path. The black and white circles represent these lights. (A third small light at each 'spot', shown in Fig. 9, was redundant and was omitted from Fig. 10.) Although the horses' lights are shown black and white in Fig. 10, they are actually red and green as labelled in Fig. 11.

Fig. 11. Plan of Horses-Game.

The horses have identical but independent rules of behaviour. They go towards the spot with more food from each of the four choice-spots (Fig. 10). A short-term memory biases them in their choice of path. They tend to make the choice, 'left' or 'right', which has recently been more successful. The teaching machine 'T' teaches the patient, who in turn learns to 'train' the two horses. The score (S) shown on the dial (Figs. 9-11) is produced by a coincidence circuit, for the object of the game is to keep the horses together.

Typical Pask teaching rules alter the game as follows. When the score increases, the horses eat faster and the 'hay' available to the patient gets less, and vice versa. The grass in the field grows steadily, apart from any 'hay' deposited.
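The published description gives no circuit values, so the following is only a toy reconstruction of the rules stated above: each 'horse' independently moves to the better-fed of the two spots it can reach, with a small short-term bias toward the side it took last (a crude stand-in for the 'recently more successful' choice); pressing a button adds 'hay'; the score counts coincidences; and, in Pask's manner, success reduces the hay available. All quantities are invented.

import random

random.seed(0)
food = {"left": 1.0, "right": 1.0}              # food at the two reachable spots of the toy field
last_choice = {0: "left", 1: "left"}            # short-term memory of each horse
hay_per_press, score = 1.0, 0

for move in range(30):
    food[random.choice(["left", "right"])] += hay_per_press   # the player deposits hay somewhere
    choices = []
    for h in (0, 1):
        bias = 0.2 if last_choice[h] == "right" else -0.2     # bias toward the side taken last
        choice = "right" if food["right"] + bias > food["left"] else "left"
        food[choice] = max(0.0, food[choice] - 0.5)           # the horse eats
        last_choice[h] = choice
        choices.append(choice)
    if choices[0] == choices[1]:                              # coincidence circuit: horses together
        score += 1
        hay_per_press = max(0.2, hay_per_press - 0.1)         # Pask-like rule: success means less hay
    else:
        hay_per_press = min(2.0, hay_per_press + 0.1)
    food["left"] += 0.1
    food["right"] += 0.1                                      # the grass grows steadily

print("score after 30 moves:", score)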

Clues, perhaps lights coming on inside the control-box buttons, are a projected refinement. Others are more complex paths, more horses, and a variable rate of grass-growth.

Pilot experiment. Twenty-five psychiatric patients of varied diagnosis and 25 normal subjects played the Horses-Game. The object was to find if the game was acceptable as a form of occupational therapy.

Results. All the patients played, mostly for over half an hour. Nearly all seemed to enjoy it; many expressed this warmly. For example, one patient said: ‘. . . . more than fascinating. There is something in these here lights that keeps you at it.’ Two thirds estimated their time of play as less than their actual time.

My conclusion was that such games, which could be considerably developed, were suitable for psychiatric patients.

C. Special investigations

(1) Intelligence tests

Multiple choice tests such as Raven's progressive matrices could readily be programmed into a typical Skinner (teaching machine) apparatus. There is a serial task: message 2 (Q). Being only a test, no correct rules (Q, CA) are given to the subject. Nevertheless, the serial structure of the task does teach; this becomes clear to the subject if he neglects the serial order. (Such neglect is possible with the existing presentation of the test in a book.)

(2) Personality tests

Multiple choice tests such as the Minnesota Multiphasic Personality Inventory (MMPI) would be similarly amenable. Here the task is not serial: message 2 (Q).

(3) Speculation: a new kind of psychological information

During the pilot experiment with the Horses-Game certain patterns of play were noted. Recording the internal states of the teaching machine might have confirmed them. Such data might be diagnostic.

Two schizophrenic patients with clinical thought-block showed a pattern of play which may have echoed this. It was very erratic, with a score going from very good to very poor, very abruptly.

Several depressed patients showed an initial anxiety but they overcame this, helped by the teaching machine, and went on to score well.

Information from advanced teaching machines might perhaps combine the sensitivity of a diagnostic interview with the objectivity of a standardised test.

This idea should be distinguished from the manipulation of ordinary diagnostic data by computers (see, for example, an article by Paycha, 1960).

ACKNOWLEDGEMENTS

Dr. David Stafford-Clark, Director of the York Clinic, Guy's Hospital, kindly allowed me to conduct the pilot experiment with the Horses-Game there. I am grateful to Mr. Richard L. Gregory for drawing my attention to the fantail on windmills. Miss N. D. Dickinson and her staff kindly typed the paper for me.

SUMMARY

In this paper the application of adaptive machines, and in particular teaching machines, to psychiatry is discussed. One such machine, the Horses-Game, has been built and tested. Learning and teaching are defined in terms of the analysis of control mechanisms put forward by Ashby (1960). Teaching machines are described with reference to an analysis of the messages which pass between the pupil, the teacher and the task. Psychiatry is described as a series of teaching situations. A criterion for the possibility of mechanisation of these situations is put forward: the requisite variety (Ashby, 1956) of the teaching machine for dealing with messages from the pupil.

Situations in psychiatry which fulfil this criterion are then listed, such as education, therapy (hypnotic induction, behaviour and occupational therapy) and special investigations (intelligence and personality tests), with a discussion of their appropriate teaching machines.

REFERENCES

ASHBY, W. R., (1956); An Introduction to Cybernetics. London, Chapman and Hall.
ASHBY, W. R., (1960); Design for a Brain. London, Chapman and Hall.
BARTON, R., (1959); Institutional Neurosis. Bristol, John Wright and Sons.
CLARK, J. H., (1961); The induction of hypnosis. Proc. Intern. Congr. Intern. Ass. Cybernetics, 3rd, Namur. In the press.
CLARKE, A. M., AND CLARKE, A. D. B., (1958); Mental Deficiency: the Changing Outlook. London, Methuen. (Ch. XIII).
CROWDER, N. A., (1962); The Arithmetic of Computers. London, English Universities Press.
EYSENCK, H. J., (1960); Behaviour Therapy and the Neuroses. Oxford, Pergamon Press.
HIVELY, W., (1960); An exploratory investigation of an apparatus for studying and teaching visual discrimination, using pre-school children. A. A. Lumsdaine and R. Glaser, Editors. Teaching Machines and Programmed Learning. Washington, National Education Association of the United States.
KEMENY, J. G., SNELL, J. L., AND THOMPSON, G. L., (1957); Introduction to Finite Mathematics. New York, Prentice-Hall.
KREITMAN, N., SAINSBURY, P., MORRISSEY, J., TOWERS, J., AND SCRIVENER, J., (1961); The reliability of psychiatric assessment: an analysis. J. ment. Sci., 107, 887-908.
LUMSDAINE, A. A., AND GLASER, R., (1960); Teaching Machines and Programmed Learning. Washington, National Education Association of the United States.
PASK, G., (1961); An Approach to Cybernetics. London, Hutchinson.
PAYCHA, F., (1960); Logique et technique du diagnostic medical. Medical Electronics. London, Iliffe and Sons.
SKINNER, B. F., (1960); Teaching machines. A. A. Lumsdaine and R. Glaser, Editors. Teaching Machines and Programmed Learning. Washington, National Education Association of the United States.
WAILES, R., (1954); The English Windmill. London, Routledge and Kegan Paul.


The Essential Instability of Systems with Threshold, and some Possible Applications to Psychiatry

W. R. ASHBY, H. VON FOERSTER AND C. C. WALKER

University of Illinois, Urbana, Ill. (U.S.A.)

INTRODUCTION

For half a century, the widespread occurrence of threshold in the nervous system, and the importance of threshold in the details of neuronic activity, have been well known. Less is known, however, about how threshold would show itself in the large - in the behavior of the organism as a whole. Two studies (Beurle, 1956; Farley and Clark, 1961) have been made of the behavior of waves of activity traveling through a nerve net. Both studies have shown that such a net would have difficulty in maintaining a steady activity, for the wave of activity tends either to die out completely or to increase to saturation. Far from being tractable and steady, from the standpoint of biological usefulness such a network displays an essential instability. Not only does it tend rapidly to the extremes of inactivity or activity, but, once there, it can be moved away from the extreme only with difficulty.

This finding deserves emphasis because it is quite contrary to the plausible idea that threshold stabilizes a network. It also suggests that the actual brain must incorporate some mechanism that actively opposes the instability.

The studies cited are complex and do not allow the instability and the threshold to be related directly and simply. Here, we shall show that an extremely general and simple model still allows the relation to be displayed clearly. It also allows us to see more readily what is essential.

I. ASSUMPTIONS AND DEFINITIONS

Consider reacting units in great numbers joined to form a network so large that we do not have to consider edge effects. Each reacting unit is affected by pulses (of some physical activity) that come to it, and it reacts by emitting (or not) pulses of the same physical activity. At any moment, these pulses will be in existence at various points in the net; by considering the actual number of pulses as a fraction of the possible number, we can speak of the 'density' of pulses over the net. We can also speak of the probability that a particular point has a pulse, provided we are careful to distinguish the two sample spaces: (1) various places at one moment, and (2) various moments at one place. In this paper we shall be especially interested in how the density changes with time.

To be precise, let us assume that all the reacting units are identical, that each reacting unit has n input channels, and that it emits a pulse if and only if at least θ of its inputs receive a pulse. We will assume here that time is advancing in discrete steps, and that the system's next state depends determinately on its state at the preceding instant of time. For this reason we assume that all pulses arrive at all inputs simultaneously. After Δt, the consequent pulses occur at the outputs, and thereby become the next set of pulses at the inputs.

The reacting unit's inputs are assumed to receive pulses independently, in that the occurrence of a pulse on one input is not to affect the probability of occurrence of a pulse on another input. The assumption implies that the net has no short loops, either by branches re-converging in feed-forward, or by short loops of feed-back.

Change of density with time

Given that the network is occupied by pulses to a density d, we can now find what will be its density d' at one step Δt later.

Since we are assuming that density and probability may be used interchangeably, the probability d_i that precisely i inputs on a particular reacting unit receive pulses is given by

    d_i = C(n, i) d^i (1 - d)^(n-i),     (1)

C(n, i) being the binomial coefficient. Consequently, the probability that at least θ inputs are active, i.e., that the unit fires, is given by the cumulative binomial probability function (Ordnance Corps, 1952):

    d' = Σ (from i = θ to n) C(n, i) d^i (1 - d)^(n-i).     (2)

It is convenient to notice that d' is also given by

    d' = θ C(n, θ) ∫ (from 0 to d) x^(θ-1) (1 - x)^(n-θ) dx.     (3)

Stability of the density

These equations, by giving the next value (d') of d as a function of its preceding value (and of the parameters n and θ), give, in effect, a difference equation: as d gives d', so can d' give d'', d'' give d''', and so on. In this way we can find what happens to the density as the time is continued indefinitely. When solved, d must be considered as a function of time, now introduced in the place of the mere difference Δt.

The question that concerns us here is: will the value of d tend to stay near its initial value, or will it tend to run away to an extreme? To answer this question, two distinct questions must be considered. First, do equilibrial densities exist? Do densities exist which, once established, thereafter produce the same densities, densities for which d' = d? (Such equilibrial densities will be denoted by d*.) Secondly, how do the densities change in the neighborhood of these d*'s? If a perturbation from d* is followed by a return to d*, the equilibrium is 'stable'. Mathematically, a stable equilibrium exists at d* when

    |∂d'/∂d| < 1,     (4)

the derivative being given its value at d*.

The manner in which d' depends on d can be seen by examination of the curve that shows their functional relation; variations in the parameters n and θ will generate a family of such curves. They are best discussed by separating off the extreme cases.

Consider first the class in which θ has values other than its possible extreme values of 1 and n.

By differentiating equation (3) both once and twice with respect to d it can easily be seen that d'(d) has zero slope at both d = 0 and d = 1, and exhibits a single inflection at the point d = (θ - 1)/(n - 1). Since d' = 1 at d = 1, and d'(d) is continuous in the interval, d'(d) can cross the diagonal (d' = d) only once. At that point (one of the values of d*) its slope must exceed one, for it starts below the diagonal and finishes above it. Thus when 1 < θ < n, the stability criterion (4) is not satisfied by any point between 0 and 1. The density is stable only at 0 and at 1.

At the extremes of θ, when θ equals 1 or n, the network is degenerate and may be dismissed as uninteresting. Its behavior would not be essentially different from that of the other case.

Fig. 1. Density at the next step (d') as a function of the previous density (d) (threshold 5, 10 inputs).

Fig. 1 illustrates the situation in the simple case where n = 10 and θ = 5. The equilibria d* are the three points where d'(d) intersects the diagonal: at 0, 0.42, and 1. Those at 0 and 1 are stable; that at 0.42 is unstable in the sense that the slightest perturbation from this value is followed by the density going to one of its extreme values. In Fig. 1 the changes in density from an initial value of 0.5 are indicated by the stairway. Fig. 2 shows how various perturbations provoke a runaway to an extreme value.

How fast does the density diverge? Fig. 2 (with the time marked in arbitrary steps) shows how rapid the divergence is in the particular case when n = 10 and θ = 5. Perturbations of more than one per cent result, in fewer than half a dozen steps, in the density being practically at 0 or 1. There is no reason to believe that our example is exceptional.

Fig. 2. Change of density with time from various initial densities (threshold 5, 10 inputs).
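A short numerical check of the behaviour shown in Figs. 1 and 2, using the cumulative binomial of equation (2) with n = 10 and θ = 5: initial densities a little below and a little above the interior equilibrium run away to 0 and to 1 within a few steps, and the slope at that equilibrium exceeds one, so criterion (4) is violated there.

from math import comb

def next_density(d, n=10, theta=5):
    # equation (2): probability that at least theta of the n inputs are active
    return sum(comb(n, i) * d**i * (1 - d)**(n - i) for i in range(theta, n + 1))

for d0 in (0.40, 0.42, 0.44):                 # perturbations around the interior equilibrium
    d, history = d0, []
    for _ in range(6):
        d = next_density(d)
        history.append(round(d, 3))
    print(d0, "->", history)

# locate the interior equilibrium d* by bisection on d' - d, then check its slope
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if next_density(mid) < mid else (lo, mid)
d_star = (lo + hi) / 2
slope = (next_density(d_star + 1e-6) - next_density(d_star - 1e-6)) / 2e-6
print("d* =", round(d_star, 3), " slope at d* =", round(slope, 2), "(greater than 1: unstable)")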

II. THRESHOLD VARIABILITY

The main point of our paper is to demonstrate that certain types of systems with threshold tend to be markedly unstable. As it is not impossible that the brain uses such systems, yet is not markedly unstable, it is of interest to glance at the question whether some simple extra factor could eliminate the instability.

One obvious possibility would occur if the threshold were not fixed but variable. Any density d (other than exactly 0 or 1) can be made to give a subsequent density d' (either more or less than d) by a suitable value of θ. So, if the threshold is varied quickly enough, the network's density can be driven in any desired direction; thus, a feedback from density to threshold could result in a network capable of operating indefinitely without d getting stuck on 0 or 1. Such a feedback could easily be provided by evolution and natural selection; there must be many ways by which an increase in pulse-density could lead, by changed physico-chemical conditions, to a change in threshold value.
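One invented way of realizing such a feedback, continuing the sketch above: at each step the threshold is chosen so as to drive the next density toward a set point (here 0.5). With this rule the density of the n = 10 network no longer runs away to 0 or 1 but stays in a middle band; the particular choice of rule is purely illustrative.

from math import comb

def next_density(d, n, theta):
    # cumulative binomial of equation (2), as before
    return sum(comb(n, i) * d**i * (1 - d)**(n - i) for i in range(theta, n + 1))

n, set_point, d = 10, 0.5, 0.3                  # start well inside the runaway region of Fig. 2
for step in range(10):
    # illustrative feedback: pick the threshold that brings the next density closest to the set point
    theta = min(range(1, n + 1), key=lambda th: abs(next_density(d, n, th) - set_point))
    d = next_density(d, n, theta)
    print(step, "theta =", theta, " density =", round(d, 3))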

Discussion of specific details would be out of place in this paper. We would, however, like to glance at the events that would occur if, once provided by natural selection, the feedback mechanism went wrong.

III. FEEDBACK FAILURE AND PSYCHIATRY

Such a feedback would have to be provided by genetic control. What would happen in our network if the feedback were different - malformed or malfunctioning in some way? The question is not without psychiatric interest.

As a first point, we may notice that if the net is at all extensive, the corrective feedback would have to be applied all over it. As Fig. 2 showed, a very few steps uncorrected are sufficient to allow the density to go to an extreme value; and were the corrections fed back so that patches of neurons, or other sites of threshold, were more than a unit or two from a site of corrective action, then those patches would be unstable and would go rapidly to extreme values. Suppose that, for whatever reason, some patches failed to receive the corrective action; what would happen?

If the system generally were adaptive: very little. The uncorrected patches will almost at once go to extreme values, and then one of two events will occur. If the values stay at the extreme, the values will become part of the constant conditions to which the adaptive parts are adapting, and the adaptation will occur in the usual way, the adaptation being merely forced to follow a different route. If the extreme value is not permanent but conditional, the value will fluctuate with the conditions around it, and the density in the patch will simply alternate between two values (equivalent to 0 or 1). The density in the patch has now simply become a variable of the system, and again adaptation will be in no way prevented, only forced to follow another route. In both cases failure of the corrective feedback would entail only that the patch, instead of having rich internal possibilities of variations, will either lose them completely, or will diminish them to the point of its becoming a mere binary. The performance of the whole system will tend simply to fall to that of a fractionally simpler system, the fractional fall being approximately proportional to the size of the patch.

So much for patchy failures. What about failures of the feedback generally? A point of interest is that every feedback, especially when its variation with time is continuous, readily develops oscillation - starts to 'hunt' - especially if there is an appreciable delay in the transmission of effects round the loop. Should such an oscillation occur, it would be the density that would show the rhythm. The changes would have to occur at a whole order of time slower than those that generated the density, for many pulses must occur, over much more than an average pulse-interval, before the density can be sensed as a density. Thus the hunting-oscillations would be expected to occur with a much longer period than that of the pulse-event. There is clearly a possibility that the Berger rhythm might owe its origin to this cause; more we can hardly say at the moment.

If the feedback becomes generally inactive, the whole system would soon go either to complete inactivity, or to maximal activity; and from these states it could be moved only with difficulty.

If the density went to 0, and remained there, we would see a system that would remain inactive if undisturbed, that would respond only to a minimal degree if disturbed, and that would return to complete inactivity as soon as the disturbance ceased. Its behavior would be much like that of a patient in melancholia. On the other hand, if the system had gone to maximal excitation, and was kept there by its own instability, one would see a system active at all times, even when no disturbances come from the outside. It would be active because it was driven by its own activity, and would give an appearance much like that of a patient in acute mania. These two correspondences, between dynamic theory and clinical observation, are obviously of interest, but much more work would be necessary before we can take the correspondence further.

One last possibility remains to be considered. How might drugs and other biochemical factors affect the system with such an instability, corrected by a feedback-varied threshold? If the drug acts simply to cause a general and uniform shift in the threshold, any change it induces will be at once opposed by changes caused by the supposed correcting feedback; so the drug, in the presence of the correcting mechanism, would simply have a reduced effect. If, however, the drug, instead of affecting the threshold, weakens the feedback itself, the system would become one in which localized runaways could occur. The system would break, by its own dynamics, into a patchwork of high, low, and moderate densities. The behavior shown by such a system would probably be characterized chiefly by its bizarreness. Today a number of drugs are known - the so-called ataractics - characterized by their power of inducing behaviors of extremely mixed type. Again, whether the mechanism we have considered will be found to be relevant to these questions will have to be elicited by further study.

ACKNOWLEDGEMENT

Work supported by Contract NSFG 17414 of the National Science Foundation.

SUMMARY

Any large system that uses threshold as the criterion for whether transmission is to occur at each node is fundamentally unstable in its density of active points. Any increase in density, by increasing the chance that other stimuli will be successfully transmitted, tends to cause yet further increases in density. Calculation of the exact effects confirms the expectation.

Were the threshold fixed, and were the conditions as assumed, any network using threshold would be erratic in action, tending continually to run away either to complete inactivity or to maximal activity. Since many processes in the nervous system are not normally seen to exhibit such runaways, there must be some factors opposing the primary instability. One is described briefly, showing that stabilization of a thresholded network is not difficult.

The application of these facts to various symptoms seen in psychiatry is considered briefly.

REFERENCES

BEURLE, R. L., (1956); Properties of a mass of cells capable of regenerating pulses. Philosophical Transactions of the Royal Society of London, B, 240, 55-94.

FARLEY, B. G., AND CLARK, W. A., (1961); Activity in networks of neuron-like elements. Proceedings of the Fourth Symposium on Information Theory. C. Cherry, Editor. London, Butterworth, Ltd.

ORDNANCE CORPS, (1952); Tables of the Cumulative Binomial Probabilities. ORDP 20-11. Washington, D.C. Office of the Chief of Ordnance.

DISCUSSION

WIENER: A system where you can easily get instability is a system of alternators. If you have switched in an alternator of a very different frequency, or an appreciably different frequency, or one which has been wrongly phased, the system will blow up. On the other hand, if they are all locked in nothing will happen. How is this engineering problem of phasing a system which may exist over the whole country handled? In the old days human powers were used. A man watched two needles moving ahead and representing the rotation, run by synchronous motors, from different generators.


When the needles were nearly at the same speed, the apparatus was switched in. Today this is done automatically. An automatic mechanism switches the generator into the circuit only when the phase and frequency of the generator are correct. An unloaded generator wanders through the phases and when it comes to the right phase it is pulled in. If by some catastrophe one generator is overloaded, a circuit breaker is automatically blown out and the generator is again freewheeling. In other words, in systems of the sort where instabilities are not only possible but very likely, the instabilities are limited by highly non-linear apparatus which will close or open a circuit at the right time. If you stay in a narrow range, there is also a tendency, and this is most important, for the different generators to pull one another into frequency. In studying the nervous system it will be most useful to study analogies of this sort which exist in much smaller (operationally) similar complexes of machinery.

Another matter in which I am very much interested is the question of the one stage and the large scale change that you get by the action of the system. I am working on this in some statistical mechanical fields. I think it is possible to set up a statistical theory of the system to make the small changes of the system an infinitesimal transformation. Under certain circumstances this is what I am doing now with some statistical mechanical work. I hope to show that if this small change does not alter probabilities and the system is in equilibrium, neither will a large change. In other words, I feel that statistical methods and those involving integration in function space are powerful tools available for this work.

COWAN: I must confess to being rather sceptical of this line of attack. Noise is used to stabilize outputs of a system, but then there is the problem of controlling the noise and making sure that the output is not a random function of the inputs. Suppose there is no noise; then, if for one element the result is what Von Neumann termed restoration, the result is trivial. If we are not dealing with one linear threshold element, as this appears to be, but with a network, then the result is also not very interesting because the classic functions which can be obtained from such a network are trivial. You can only get 2^n plus a small number of functions out of the 2^(2^n) functions that one generally deals with in neural nets.

I have only considered systems in which you have excitation and what we call linear threshold elements. There is fairly good evidence, although I hesitate to make the jump from neural nets to the nervous system, that there is a lot of inhibition in the nervous system even at the single unit level. There are a lot of interactions of afferents in the nervous system and this gives a much richer set of functions to play with. In fact, you can get any kind of functional behavior out of such a system. The nervous system does not appear to have this kind of simple linear element, but in fact has a large collection of different kinds of elements: some monotonic, some anti-monotonic, all of which are noisy.

ROSS ASHBY: The effect of inhibitory contributions has not yet been considered by us.

WIENER: I think that one of the major problems is that the nervous system is not working near a level of zero activity. Fluctuations about random activity must be considered, which means that you cannot neglect the background activity.


ROSS ASHBY: One point I would like to make is that if the pulses are not to be grossly inefficient as carriers of information, their distributions and arrivals must be largely independent statistically. But independence of arrival generates the binomial distribution in the numbers arriving in a given time, and it is the binomial shape, with a hump in the middle rather than a hollow, that causes the instability. So there is, in the case we are considering, a fundamental conflict between the need for efficiency as a channel of communication and the need for stability.

COWAN: Surely you are not interested in inputs which are carrying independence but in fact are carrying patterns.

ROSS ASHBY: If the nervous system is to be efficient, it must re-code the patterns, getting rid of the redundancy implied by the 'pattern', until the information is compressed into a more efficient form, which must be one whose variables vary independently.

WIENER: I do not see any need of the assumption that the nervous system is working near maximum information carrying efficiency. It probably takes tremendous waste to make it work at all. I think this assumption, that the nervous system is governed by maximum possible information carrying efficiency, is wrong from the beginning.

ROSS ASHBY: I must disagree with you, unless we are talking about different things. I am not assuming and do not believe that the nervous system works with the efficiencies achieved by communication engineers: I was considering 'efficiency' only in the sense of avoiding the grossest redundancy.

WIENER: What reason have you to believe that there is no great redundancy?

ROSS ASHBY: In that case it would be highly efficient. It would be carrying up to its maximal capacity.

COWAN: The trouble with the informational analysis of the nervous system is that it treats the nervous system simply as a communication channel. It completely neglects what is very basic, the engineering aspect of the cost function. If the nervous system is regarded as a communication channel supplying energy to the system, it may turn out that what is informationally a good thing to do is, in fact, an extremely bad thing to do. The important consideration is the energetic one rather than the informational one.

ROSS ASHBY: I agree that both are important. When a rabbit, for instance, has to run for its life, the weight of one extra gram of brain may be an appreciable handicap. If the species develops brains with an extra gram of substance, that gram must give at least a compensating advantage of better muscular coordination, or maneuverability, or learning the predator’s ways, etc. Because weight alone is a handicap, natural selection will incessantly press for the avoidance of gross inefficiencies in information processing.

WIENER: It is more or less a teleological argument that we must be near the maximum efficiency informationally. We may be near the maximum we can do as an engineering job, and likely we are, but that involves the use of every bit of information.


Mathematical theory of the effects of cell structure and of diffusion processes on the homeostasis and kinetics of the endocrine system with special reference to some periodic psychoses

N. RASHEVSKY

Committee on Mathematical Biology, University of Chicago, Chicago, Ill. (U.S.A.)

In a series of important papers Danziger and Elmergreen (1954, 1956, 1957, 1958) have developed a theory of non-linear interactions in the endocrine system. In particular they discussed the pituitary-thyroid interaction. They applied those studies to periodic catatonia, assuming that the periodic character of that disease reflects periodic relaxation oscillations in the endocrine system. The theory proved to have a clinical predictive value (Danziger and Elmergreen, 1958).

The idea that periodic catatonia may have something to do with periodic fluctuations in the endocrine system, especially in the thyroid gland, is strongly suggested by the findings of Gjessing (1939, 1953). He found that rhythmic changes of the basal metabolism occur in periodic catatonia. Moreover, he found that administration of thyroid hormone does stop the periodicity and may be used as a therapy for periodic catatonia.

That malfunctions of the endocrine system are likely to be reflected in malfunctions of the central nervous system is not surprising. Though the actual mechanism of the effects of internal secretions on the central nervous system is not yet known, effects of various endocrine imbalances upon mental phenomena are an established fact. It is therefore tempting to relate the periodic changes of the mental state of a patient with possible periodic changes in his endocrine system.

In physical systems periodic fluctuations of a variable may occur whenever the behavior of that variable is described by a differential equation of second or higher order. The simplest possible equation of that type is

\frac{d^2x}{dt^2} = -ax

where a is a constant. The variation of x with respect to time is in this case a sustained undamped oscillation.
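For completeness (this remark is an addition, not part of the original text): for a > 0 the general solution of the equation above is

x(t) = x_0 \cos(\sqrt{a}\,t) + \frac{v_0}{\sqrt{a}} \sin(\sqrt{a}\,t),

so the amplitude \sqrt{x_0^2 + v_0^2/a} is fixed by the initial values x_0 and v_0 and does not change with time; the oscillation is sustained and undamped.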

When we deal with chemical or biochemical systems, the variations of the amounts or of the concentrations of different substances involved are always represented by differential equations of first order. A single differential equation of first order, whether linear or non-linear, does not give periodic solutions, except in some very special and physically implausible cases. However, a system of two or more differential equations of first order may lead to periodic solutions because, as is well known, any system of n differential equations of first order is mathematically equivalent to one differential equation of n-th order.

Thus, if we deal with the secretion, production, or destruction of a single substance, we cannot expect to find any periodicities in such phenomena. But if we deal with the interaction of several substances, such an interaction described by a system of differential equations of first order may lead to periodicities.

One disappointing situation is, however, found at once. If the system of first-order differential equations is linear, then, even if periodic solutions do occur, they are in general damped, either positively or negatively. A positive damping means that the periodicities are not sustained, and that the system eventually reaches a state of equilibrium, in which the variables are independent of time. A negative damping means that the amplitudes of the oscillating variables increase with time to infinity. This means that such a system cannot exist. True enough, it is possible to find relations between the coefficients of this system of differential equations which, when satisfied, insure zero damping, and therefore sustained oscillations. But those relations are represented by equalities between certain combinations of coefficients (Rashevsky, 1960). The coefficients in question represent certain physical, chemical or biological parameters. Hence, to insure sustained oscillations, those parameters must exactly satisfy certain equalities. But the probability that two physical, chemical or biological parameters are exactly equal is infinitesimal. If, on the other hand, the equalities are not exactly satisfied, then there is always either a positive or a negative damping.
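The dependence on exact equalities can be seen explicitly in the two-variable case dx/dt = Mx with M a 2 x 2 matrix: oscillation requires complex eigenvalues, their common real part (half the trace of M) fixes the damping, and that real part vanishes only when the trace is exactly zero. A minimal sketch in Python, with arbitrary illustrative coefficients rather than values from the paper:

```python
import cmath

def classify_linear_2x2(a11, a12, a21, a22):
    """Classify dx/dt = M x for a 2x2 matrix M by its eigenvalues: oscillation
    needs complex eigenvalues, and the sign of their real part gives positive
    damping (decay), negative damping (growth), or - only when the trace is
    exactly zero - sustained oscillation."""
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    lam = (trace + cmath.sqrt(trace * trace - 4.0 * det)) / 2.0
    if lam.imag == 0:
        return "no oscillation (real eigenvalues)"
    if lam.real < 0:
        return "positively damped oscillation (amplitude decays)"
    if lam.real > 0:
        return "negatively damped oscillation (amplitude grows)"
    return "sustained oscillation (trace exactly zero)"

print(classify_linear_2x2(-0.1, 1.0, -1.0, -0.1))  # trace < 0: decays
print(classify_linear_2x2( 0.1, 1.0, -1.0,  0.1))  # trace > 0: grows
print(classify_linear_2x2( 0.1, 1.0, -1.0, -0.1))  # trace exactly 0: sustained
```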

The situation is different when the system of differential equations is non-linear. In some cases sustained periodic solutions occur in such systems regardless of the values of the coefficients. In other cases, the conditions for sustained oscillations are represented by inequalities. If an inequality is at all possible physically, then the probability of its occurrence is finite.

Any system of differential equations may be considered as approximately linear within a sufficiently small range of values of the variables. Therefore, when dealing with conditions of equilibrium, or conditions very near to equilibrium, a system of linear equations may be sufficient to describe the situation in any physico-chemical or biological system, in particular in the endocrine system (Roston, 1959). Whenever we deal with large ranges of variation of different biological parameters, and whenever such variations show any sustained periodicities, introduction of non-linearities is essential (Rapoport, 1952).

A type of non-linearity which is plausible biologically, and is of practical interest mathematically, is represented in Figs. 1a,b by the full lines. The effect of one substance, x, upon the rate of reaction of another substance, y, may be such as shown in Figs. 1a,b. The effect is pronounced for small values of x, but for large values of x the rate becomes independent of x. In Fig. 1a the rate increases with x, in Fig. 1b it decreases.

[Figs. 1a and b. Representation of the effect a substance x may have on the rate of reaction of another substance, y: (a) the rate increases with x; (b) it decreases. Broken lines: linear approximation of the curves. Abscissa: x, with the break-point of the approximation at x_0.]

With sufficient approximation the curves of Figs. 1a,b may be represented by the broken lines. With this approximation we may say that the rate varies linearly with x in the range 0 < x < x_0. It is independent of x for x > x_0. But an independence is also a linear relation, with the coefficient of proportionality being equal to zero. Thus, from a mathematical point of view, the system of non-linear differential equations breaks up into two systems of linear ones. The first holds in the range 0 < x < x_0, the second in the range x > x_0. The two systems are of the same form, but some of the coefficients are different. We may now establish the inequalities between the coefficients which will lead to negative damping for x < x_0, and inequalities which will lead to positive damping for x > x_0. In that case there will be a value of x which will represent the amplitude of undamped oscillations. The existence of such a value does not depend on the approximation shown in Figs. 1a,b, because by taking x sufficiently small and sufficiently large, the true situation will be represented by a linear approximation with any degree of accuracy.
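The mechanism just described, negative damping inside a critical amplitude and positive damping outside it, can be seen in a toy simulation. The sketch below, in Python, uses an oscillator whose damping coefficient simply switches sign at |x| = x_0; the particular system and parameter values are invented for illustration and are not taken from the paper, but trajectories started at very different amplitudes settle on roughly the same sustained oscillation.

```python
# Toy piecewise-linear oscillator: amplified while |x| < X0, damped while |x| >= X0,
# so every trajectory settles on an oscillation of about the same amplitude.
# Parameter values are arbitrary illustrative choices.
X0, EPS, DT, STEPS = 1.0, 0.3, 0.001, 200_000

def settled_amplitude(x, y=0.0):
    peak = 0.0
    for step in range(STEPS):
        damping = EPS if abs(x) < X0 else -EPS          # negative damping inside, positive outside
        x, y = x + DT * y, y + DT * (-x + damping * y)  # forward-Euler step
        if step > STEPS // 2:                           # record the peak over the last half of the run
            peak = max(peak, abs(x))
    return peak

for x_start in (0.05, 0.5, 3.0, 10.0):
    print(f"start x = {x_start:5.2f}  ->  settled amplitude ~ {settled_amplitude(x_start):.2f}")
```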

In their first paper Danziger and Elmergreen (1954) considered as a first approximation only the system thyroid-pituitary. This was, of course, an oversimplification, just as the consideration of a two-body problem in celestial mechanics is an oversimplification. The thyroid hormone is known to inhibit the formation of the thyrotropic hormone. An increase in the concentration θ of thyroid hormone reduces the rate of formation of the thyrotropic hormone. Danziger and Elmergreen assume that the inhibitory effect follows a curve similar to the one shown in Fig. 1b. Then they use the linear approximation shown by the broken line in Fig. 1b. Let c and h denote two constants. Then define the function F(θ) as follows:

F(\theta) = c - h\theta \quad \text{for } \theta < c/h, \qquad F(\theta) = 0 \quad \text{for } \theta \geq c/h \qquad (1)

Denote by p the concentration of thyrotropin, by θ the concentration of thyroid hormone in the system, by a, b, and g other constants, and by R the rate of introduction of exogenous thyroid. Then the equations which describe the content of the theory are

\frac{dp}{dt} = F(\theta) - gp \qquad (2)

\frac{d\theta}{dt} = ap - b\theta + R \qquad (3)

The function F(θ) introduces a non-linearity into the system (2)-(3). By the method of isoclines the authors readily show that for R = 0 the system (2)-(3) can exhibit only damped oscillations of p and θ around equilibrium values*. Thus, continued undamped periodicities as observed in periodic catatonia cannot occur. Those are ascribed to some malfunctioning of the system, the nature of this malfunctioning being left indefinite. Equations (2) and (3) show, however, that by introducing exogenous thyroid in sufficient quantity, the whole mechanism may be brought to a standstill. If, namely, R is sufficiently large, then according to (1) and (2), dp/dt and therefore also p may be made zero. Then dθ/dt also becomes zero, and the level of θ becomes θ = R/b. It is determined only by the exogenously administered thyroid. Whatever the mechanism of the malfunctioning may be, which causes the undamped oscillations, as soon as by a sufficiently large R the whole system is rendered inoperative, the mechanism of malfunctioning itself is eliminated. The thyroid level θ remains constant, while p becomes zero. This gives an interpretation of the observations of Gjessing (1939) that symptoms of periodic catatonia may be eliminated by keeping the patient continuously on an appropriate administration of thyroid.

* Actually, in their first paper Danziger and Elmergreen (1954) used a set of equations different from (2) and (3), namely

\frac{d\theta}{dt} = \frac{k_2 m_2 p}{1 + m_2 p} - b\theta \qquad (3a)

It can be shown that in regard to periodicities the properties of this system of equations are the same as those of the system (2)-(3). The method of isoclines, used by the authors for the system (2a)-(3a), can be used also for the system (2)-(3). The latter is obtained from the former by approximating k_1 m_1 θ/(1 + m_1 θ) by two segments of straight lines and by making m_2 sufficiently small.
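The qualitative behaviour claimed for the system (2)-(3) is easy to verify numerically. Below is a minimal forward-Euler sketch in Python; the constants c, h, a, b, g are arbitrary illustrative values (chosen only so that the linearized system oscillates), not values fitted to any data. With R = 0 the variables spiral into their equilibrium values; with a large R the thyrotropin level p is driven to zero and θ settles at R/b, as stated in the text.

```python
# Forward-Euler integration of the system (2)-(3) with the piecewise-linear F of (1).
# All constants are arbitrary illustrative values.
C, H, A, B, G = 10.0, 2.0, 1.0, 0.5, 0.4
DT, STEPS = 0.001, 200_000

def F(theta):
    return max(C - H * theta, 0.0)        # equation (1)

def run(R, p=1.0, theta=1.0):
    for _ in range(STEPS):
        p, theta = (p + DT * (F(theta) - G * p),           # equation (2)
                    theta + DT * (A * p - B * theta + R))  # equation (3)
    return p, theta

for R in (0.0, 50.0):
    p, theta = run(R)
    print(f"R = {R:5.1f}:  p -> {p:7.4f}   theta -> {theta:7.4f}   (R/b = {R / B:7.4f})")
```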

References p. 2531254

Page 257: Progress in Brain Research Volume 2:Nerve,Brain and Memory Models

248 N. R A S H E V S K Y

In their later theory (Danziger and Elmergreen, 1958) the authors use a more realistic model, in which thyrotropin activates the formation of an intermediate enzyme E, which in turn activates catalytically the formation of θ. The basic equations are now:

\frac{dp}{dt} = F(\theta) - gp \qquad (4)

\frac{dE}{dt} = mp - kE \qquad (5)

\frac{d\theta}{dt} = aE - b\theta + R \qquad (6)

Both by an approximate analytic method and by the use of a computer, it has been shown that within a definite range of the constants a, b, c, g, h, k, and m the system (4)-(6) describes undamped oscillations. Thus the occurrence of periodicities is now definitely linked to a change in the above-mentioned constants. The conclusions drawn from the system (2)-(3) remain valid, but another important conclusion can be drawn. The administration of exogenous thyroid is in general connected with the danger of temporarily overshooting the permissible safe level of θ. An administration of a drug which inhibits the formation of E, in other words reduces the constant m, will lower the level of θ. If such a drug is administered prior to the administration of thyroid, much larger doses of thyroid may then be administered safely. It was considered that a goitrogen, for example propylthiouracil, would act as an inhibitor of the formation of θ. The expected results were verified clinically.

It should, however, be remarked that the same conclusions could be reached from equations (2) and (3) by assuming that the goitrogen reduces the coefficient a in equation (3).

While the theory has proved to be useful clinically, it is subject to a serious objection on theoretical grounds. Both the system (2)-(3) and the system (4)-(6) describe a chain of reactions in a homogeneous system, whereas actually the system is highly heterogeneous. The formation of the different components, such as p, θ, and E, occurs at different sites, and the components are transported from site to site by the blood stream. This may change the form of the equations and affect the conclusions. Let us first investigate the simpler case, represented by the system (2)-(3).

The thyrotropic hormone is produced in the cells of the pituitary gland. For simplicity we may consider N_p such cells, each being surrounded by the blood stream. This is too oversimplified a picture, but as a first step it may be justified.

Actually we should consider not only the interior of the cell and the blood, but also the intercellular fluid. The same argument as below can be applied to that more general case and leads to more complicated expressions. The basic conclusions, however, which we discuss at the end of this paper, remain the same. A similar consideration holds for the cells of the thyroid gland, the number of which we shall denote by N_θ. The blood stream we shall consider as having a constant volume V_b.

For the average concentration c̄ of any substance which is produced or consumed in a cell at a rate q per unit volume, and which is present in the surrounding medium in a concentration c_0, we have quite generally (Rashevsky, 1960):

\frac{d\bar{c}}{dt} = q - \frac{\bar{c} - c_0}{A} \qquad (7)

where A is a function of the size and shape of the cell, as well as of the permeability of the cell for that substance and of the diffusion coefficient of the substance inside and outside the cell. When, as in the blood stream, there is a considerable mixing, then we may put the outside diffusion coefficient as being infinite. We remark that for a finite size of the cell A = 0 if, and only if, both diffusion coefficients and the permeability are infinite. Since we already consider the external diffusion coefficient as infinite we have:

A = 0 \quad \text{if and only if} \quad D = h = \infty \qquad (8)
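As a side remark (an addition, not part of the original text): setting dc̄/dt = 0 in (7) gives the steady state of a producing cell,

\bar{c}_{\infty} = c_0 + qA,

so A plays a double role, as the relaxation time with which the cell approaches the blood concentration and as the standing excess of the internal over the external concentration per unit rate of production; A = 0 means instantaneous equilibration with the blood.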

Before proceeding further we must specify somewhat more definitely the mechanism of action of thyrotropin on the formation of thyroid hormone, and of the thyroid hormone on the inhibition of formation of thyrotropin. If we assume that this mechanism is purely catalytic, so that in affecting the formation of thyrotropin the thyroid hormone is not constantly consumed in the cell of the pituitary gland, then the concentration θ_p of the thyroid hormone inside the cells of the pituitary gland is the same as in the blood. Denoting the latter concentration by θ_b we have

\theta_p = \theta_b \qquad (9)

Similarly, if the thyrotropin acts only as a catalyst in the formation of the thyroid hormone, the concentration p_θ of thyrotropin inside the cell of the thyroid gland is the same as the concentration p_b of thyrotropin in the blood; thus:

p_\theta = p_b \qquad (10)

If, on the other hand, the formation of the thyroid hormone involves some reaction which consumes thyrotropin, or if the inhibition of the formation of the thyrotropin involves a reaction which consumes the thyroid hormone, then equations (9) and (10) do not hold. This, as we shall presently see, results in an increase of the number of variables and of the number of equations. For simplicity we shall assume that the reactions described above are purely catalytic.

We have then four variables: the concentration p̄ of thyrotropin inside a cell of the pituitary gland, the concentration θ̄ of the thyroid hormone inside a cell of the thyroid gland, and the concentrations p_b and θ_b in the blood. For the rate of production of p we have now, because of (9) and of (2), the expression:

q_p = F(\theta_b) - g\bar{p} \qquad (11)

Hence, denoting by A_p the value of A for p, we have from (7):

\frac{d\bar{p}}{dt} = F(\theta_b) - g\bar{p} - \frac{\bar{p} - p_b}{A_p} \qquad (12)

The last term on the right side of (12) represents the average outflow of thyrotropin per unit volume of the producing cell. If V_p denotes the volume of each such cell, then the total outflow into the blood stream equals:

N_p V_p \, \frac{\bar{p} - p_b}{A_p} \qquad (13)

While in the blood stream the thyroid hormone and the thyrotropin do not interact, their destruction proceeds at the corresponding rates −g p_b and −b θ_b per unit volume. Hence the total rate of destruction of p_b is equal to −V_b g p_b. On the other hand, expression (13) represents the total input of thyrotropin into the blood. Hence the rate of change of the total blood thyrotropin is

V_b \frac{dp_b}{dt} = N_p V_p \, \frac{\bar{p} - p_b}{A_p} - V_b g p_b \qquad (14)

or

\frac{dp_b}{dt} = \frac{N_p V_p}{V_b A_p}\,\bar{p} - \frac{N_p V_p + V_b A_p g}{V_b A_p}\,p_b \qquad (15)

By a similar argument we obtain now from equation (3)

and for dθ_b/dt we have:

Put

Then equations (12), (15), (16) and (17) may be written thus:

Instead of two equations, we now have four. Because of F(θ_b) the system is non-linear. If we assume non-catalytic effects of thyrotropin on thyroid hormone, and vice versa, equations (9) and (10) will not hold. We shall have six variables, θ̄, θ_b, θ_p, p̄, p_b, p_θ, and two extra equations which will depend on the assumptions about the rates of consumption of thyrotropin in the thyroid gland, and of thyroid hormone in the pituitary gland.

We obtained the system (19)-(22) by keeping all the basic assumptions of Danziger and Elmergreen, but introducing the effect of heterogeneity which cannot be neglected.

By the same method as that used by Danziger and Elmergreen (1957), it can be shown that when certain inequalities between the quantities A, B, C , D, G and H in expressions (18) are satisfied, the system (19)-(22) has undamped periodic solutions (Elmergreen, personal communication). The details of this shall be given elsewhere.

In the case of the system (4)-(6) we may assume, for simplicity, that E is produced in the cells of the thyroid gland and does not diffuse outward. We again assume that the effect of thyrotropin on the production of E is purely catalytic. In that case we obtain the following system of equations:

\frac{d\bar{p}}{dt} = F(\theta_b) - g\bar{p} - \frac{\bar{p} - p_b}{A_p} \qquad (23)

\frac{dp_b}{dt} = \frac{N_p V_p}{V_b A_p}\,\bar{p} - \frac{N_p V_p + V_b A_p g}{V_b A_p}\,p_b \qquad (24)

\frac{dE}{dt} = m p_b - kE \qquad (25)

\frac{d\bar{\theta}}{dt} = aE - b\bar{\theta} - \frac{\bar{\theta} - \theta_b}{A_\theta} \qquad (26)

\frac{d\theta_b}{dt} = \frac{N_\theta V_\theta}{V_b A_\theta}\,\bar{\theta} - \frac{N_\theta V_\theta + V_b A_\theta b}{V_b A_\theta}\,\theta_b + R \qquad (27)

Again it may be shown that for certain ranges of the parameters, the system (23)-(27) does have undamped periodic solutions.

The conclusions drawn from the system (2)-(3) in regard to the effects of administration of thyroid hormone remain valid for the situation described by the system (19)-(22). Similarly the conclusions drawn from the system (4)-(6) in regard to the effects of administration of a goitrogen and of thyroid hormone remain valid for the situation described by the system (23)-(27). The effect of thyroid administration is due to the appearance of the function F(θ_b) in (19) and in (23). A sufficiently large exogenous input of thyroid hormone will increase θ_b to the value c/h. This, according to (1), will inhibit completely the formation of thyrotropin, which in turn will stop the formation of θ and of E.

It is interesting that now, by taking into account the size, shape, and physical constants of cells, we find that sustained oscillations may occur even for the two-component system, thyroid-pituitary. The introduction of the intermediate enzyme E is now not necessary, so far as the existence of sustained periodicities is concerned. The effects of administering a goitrogen for prevention of ‘thyroid overshoot’ must now be interpreted as meaning that the goitrogen reduces the coefficient a in (21).


An important biological conclusion, which may be of clinical significance, may be drawn from the above results. The systems (19)-(22) and (23)-(27) have undamped periodic solutions within given ranges of the parameters. The values of the parameters A, B, C, D, G, and H depend on A_p and A_θ according to (18). Moreover, both A_p and A_θ appear directly as parameters of the system. But A_p is a function of the size and shape of those cells in the pituitary gland which produce thyrotropin. The parameter A_p is also a function of the diffusion coefficient D_p of thyrotropin in those cells, and of the permeability h_p of the cell membranes to thyrotropin. Similarly, A_θ is a function of the size and shape of those cells of the thyroid gland which produce the thyroid hormone. It is, moreover, a function of the diffusion coefficient D_θ of thyroid hormone in those cells and of the permeability h_θ of those cells to that hormone.

If the onset of undamped relaxation oscillations is produced by changes in any of the parameters of the systems (19)-(22) and (23)-(27), those changes may be produced either by a change in the size or shape of the hormone-producing cells, or by a change of the physical properties of those cells which may affect D_p, h_p, D_θ, and h_θ. An increased water content of the cells may, for example, result in an increase of their size, as well as modify the D’s and the h’s. A mechanical trauma, or a tumor, may result in a modification of the shape of the cells. Factors affecting the viscosity of the cytoplasm will affect the D’s and the h’s.

If a study of the systems (19)-(22) and (23)-(27) establishes the ranges of the parameters, or of their combinations, for which undamped oscillatory solutions appear (respectively do not appear), this will lead from the known expressions for the A’s to a set of algebraic equations which will show what modifications in the size and shape of the cells, and in the values of the D’s and h’s, will bring the systems (19)-(22) and (23)-(27) into an oscillatory state. Inversely, the equations will indicate what modifications in the structures of the cells should be brought about by appropriate therapeutic measures in order to stabilize the system.

The thyroid therapy of periodic catatonia is based on the elimination of a faulty mechanism and on supplying, after such an elimination, the patient continuously with exogenous thyroid. The above discussions indicate a line of research which may eventually lead to an actual cure of the disease, by restoring normal conditions in the organism.

In any case, the above considerations suggest a histological study of the pituitary and thyroid glands in patients with periodic catatonia. Such a suggestion is, of course, easier to make than to execute.

A very important point must be mentioned in conclusion. The systems (19)-(22) and (23)-(27) are based on the assumption that the transport of components is effected by free diffusion. Secretion of hormones, however, more than likely involves phenomena of active transport. Introduction of the latter will further complicate the equations. This problem must be reserved for another study. Also, as we mentioned above, it will be necessary to consider a third phase - the intercellular fluid. But the important conclusion remains. The size and shape of the cells or of the whole organ, as well as the purely physical parameters, such as diffusion coefficients and permeabilities, must play an important part in the production of periodicities in the endocrine system.


Those conclusions lead to a number of suggestions for experiments which may become of importance clinically.

Since the presentation of this paper a further study of the system (19)-(22) has led to inequalities which must hold between the A’s in order that periodicities should not occur. It is found that large values of A_p and A_θ would prevent sustained periodic variations. This means that smaller values of the diffusion coefficients and permeabilities, larger sizes of the cells of the glands, and more rounded shapes of these cells should be beneficial. The details will be published elsewhere.

ACKNOWLEDGEMENT

This work has been aided by a grant from Dr. Wallace C. and Clara A. Abbott Memorial Fund of the University of Chicago.

SUMMARY

In a series of papers, Danziger and Elmergreen have developed a mathematical theory of periodic phenomena in the endocrine system, based on the non-linearity of certain feedback biochemical circuits. These periodicities were assumed to be responsible for such psychoses as periodic catatonia. Clinical applications of the theory have been made in connection with R. Gjessing’s work on the effects of thyroid administration. In their theory, however, Danziger and Elmergreen use, as a first approximation, equations of biochemical kinetics which are valid only for homogeneous systems. In other words, they neglect the cellular structure of the endocrine organs and the effects of diffusion and permeability on the rates of biochemical reactions.

These effects are taken into consideration, still in an approximate manner, in the present paper. This generalized theory shows that the various parameters of the system which determine the character of periodicities depend not only on the biochemical reaction rates but also on the diffusion constants of the various reactants in the cell bodies.

They also depend on the permeabilities of various tissue membranes to those reactants and, what is of particular interest, on the size and shape of the cells which constitute an endocrine gland. If, as the evidence given by Danziger and Elmergreen indicates, periodic catatonia does reflect certain periodicities in the endocrine system, then a number of conclusions of potential practical importance follow from this paper. Not only does this theory suggest possible effects of drugs which affect the viscosity of the cytoplasm or the membrane permeabilities, but it also suggests the possibility of effects produced by purely mechanical trauma or tumors.

REFERENCES

DANZIGER, L., AND ELMERGREEN, G., (1954); Mathematical theory of periodic catatonia. Bulletin of Mathematical Biophysics, 16, 15-21.

DANZIGER, L., AND ELMERGREEN, G., (1956); The thyroid-pituitary homeostatic mechanism. Bulletin of Mathematical Biophysics, 18, 1-13.

DANZIGER, L., AND ELMERGREEN, G., (1957); Mathematical models of endocrine systems. Bulletin of Mathematical Biophysics, 19, 9-18.

DANZIGER, L., AND ELMERGREEN, G., (1958); Mechanism of periodic catatonia. Confinia Neurologica, 18, 159-166.

GJESSING, R., (1939); Beiträge zur Kenntnis der Pathophysiologie periodisch katatoner Zustände. Archiv für Psychiatrie und Nervenkrankheiten, 109, 525-595.

GJESSING, R., (1953); Beiträge zur Somatologie der periodischen Katatonie. Archiv für Psychiatrie und Zeitschrift für Neurologie, 191, 191-326.

RAPOPORT, A., (1952); Periodicities in open linear systems with positive steady states. Bulletin of Mathematical Biophysics, 14, 171-183.

RASHEVSKY, N., (1960); Mathematical Biophysics: Physico-Mathematical Foundations of Biology. Vol. 1, 3rd Edition. New York, Dover Publications, Inc.

ROSTON, S., (1959); Mathematical representation of some endocrinological systems. Bulletin of Mathematical Biophysics, 21, 271-282.

DISCUSSION

WIENER: You are talking about periodic oscillations; however, they are not physiologically typical. In general, we have oscillations with a frequency range around a certain period and not too sharp periodic oscillations. Non-periodic oscillations are handled by the autocorrelation method. In addition, you have brought in the irregularity of the cell shape. This can be regarded as the sort of irregularity which you get with a statistical field. I am very much interested, in physical problems, in starting with statistical fields, with transformations which leave the statistics, but not the individual case, invariant, and getting the autocorrelations. I am quite certain that methods of that type may find an application here. In other words, one can bring in the variation of the cell shape by a spatial stochastic process, and then go into the statistical dynamics of the process. One can get the autocorrelation and the question of instability or of sustained oscillations. The oscillations then need not always be sustained at one frequency. I have a suspicion that when we use methods of this sort, which are not precise, one may get more manageable mathematical tools and be able to handle the questions of sustained oscillation or dying-out oscillation in a more practical way.

RASHEVSKY: I believe that Professor Wiener’s suggestions are very valuable. I would like, of course, to see them in a somewhat more specific form. There are always several approaches to a problem, and it is usually desirable to approach every problem from different angles. As I pointed out, all I have done is to generalize the work of Danziger and Elmergreen, to take care of the non-homogeneity of the system, of its compartmental nature. Even this I have done only to a first approximation. Future research will show which of the different possible approaches is the most useful. I am inclined to venture the guess that in many respects the approach suggested by Professor Wiener and the one used by me will be found equivalent. For certain aspects of the problem one approach may be found to be more useful, for other aspects the other approach may be preferable.

WIENER: I think that one can go directly back, in the sort of work I have been doing, from the autocorrelation to the spectrum. In this, the system must be considered as a field or a number of fields where the vibration is a statistical one. The steady state of such a system must be regarded as a statistical whole. I am working on similar problems in gas theory. In this, statistical methods depending on functions over space are used. I feel that this can be employed with advantage, and probably with a great deal of simplification, in this case. I think much of the complication will vanish if the point of view is completely statistical from the beginning.

RASHEVSKY: Again I must say that I agree with Professor Wiener in general. The importance of stochastic approaches to such problems is obvious. However, in some cases, individual cells may exhibit certain constancies. In such cases the simpler approach presented today may be justified.

WIENER: At present I am working with analogues of gases where you have specific molecules of the gas. In this analogue you deal with continua and you can get an approximation. The phenomena of specific molecules are a certain invariant of space frequency, but these space frequencies are limited. A theory which may be more usable than the classical theory of gases can be constructed to predict a fairly good approximation of plasmas, where you have long-distance forces. To me, this strongly suggests that an approach to these fields may be the method of statistical falsification. For instance, in the field of endocrine gland cells, the shape of the individual cell would not be considered, but correlations by distances and so on would. I think there are real possibilities for methods of this sort in many cases because you introduce the statistical element early. You may avoid some extremely complicated equations in this way.

RASHEVSKY: I would venture a prediction, Professor Wiener, that at some future date your approach and mine will be found to converge.

SCHOUTEN: What is the relation between your equations and Van Der Pol’s non-linear equation describing relaxation oscillations?

RASHEVSKY: Van Der Pol was one of the first to study possible biological applications of relaxation oscillations. To the extent that nowadays the term relaxation oscillation denotes all types of non-linear oscillations, this work is related to that of Van Der Pol. A number of other cases of relaxation oscillations in biology have been studied; for example, Professor Thorsten Teorell in Uppsala is now studying relaxation oscillations in membrane phenomena.

MERCER: I wish to criticize Dr. Rashevsky’s treatment on the grounds that he has neglected the most important changes occurring in the cells of organs linked by hormones. When two organs are linked by hormones transported from one to the other, the hormone of organ A causes the cells of organ B to differentiate and even to increase in number and thus to begin the production of their hormone. This influences organ A. The periodic phenomenon probably arises as a result of the lag in time required to produce the second hormone, i.e. for the cells to differentiate. These changes are much more fundamental than any envisaged in Dr. Rashevsky’s treatment.

RASHEVSKY: Effects of time lags as possible causes of oscillations in biology have recently been studied by Robert Rosen, from a topological point of view. Possible effects of time lags in the endocrine system have recently been studied by me, but not yet published. I do not believe, however, that effects of cell differentiation can play any essential role in the phenomena which I discussed today. In an adult organism we do not observe differentiation and proliferation, at least not within such short periods as are dealt with in periodic catatonia. Such effects undoubtedly play a role in embryonic development. This, however, falls outside the scope of the present study.

WIENER: I would like to say here that I think that what you brought up is very important, namely, the plasticity of the organs themselves; and I think that for the longer-time oscillations what you are saying has very great consequences.

VAN SOEST: I quite agree with the foregoing speakers that you certainly have to introduce time into your equations, and the integration of it. Autocorrelation or some other memory system will surely have to be introduced in order to resolve the more relevant characteristics of your problems.

RASHEVSKY: Time enters explicitly into our equations. I would say that what is necessary to introduce, for long-term phenomena, is some kind of hysteresis effects. Such effects have been studied by us in various publications in other fields of mathematical biology. A further generalization of our study will undoubtedly result in an introduction of some hysteresis effects. Memory is, as we have discussed elsewhere, a very special case of hysteresis.



Irrelevance of a ‘Nervous System’

O. D. WELLS

Artorga Research Group, Beaulieu, Hants (Great Britain)

It seems a bit irreverent to come among a group of neurophysiologists and talk about the irrelevance of a nervous system. It is something like studying animal structure and talking about the irrelevance of the heart. However, the purpose of this paper is only to try to define whether or not a nervous system is relevant to the degree of emphasis now placed upon it.

The argument runs as follows: In spatial terms, the surviving species members are heavily weighted in number towards those with no recognisable nervous system. These would include all uni-cellulars (protozoa, amoeba, etc.), bacteria, viruses, etc., many of the multi-cellulars, and the entire vegetable world. If we treat the organs of the body as quasi-independent organisms, the blood, liver, heart, etc. have no nervous system of their own. And if we draw the dividing line at the development of a central nervous system at, say, the stage of the earthworm, we have a still larger preponderance of species. Temporally, if we accept the guess of 2-4 billion years of evolutionary development, then the development of a nerve-net type goes back only about half a billion years, and that of a central nervous system only, say, 150 million years.

The further argument is that, except for rapidity of transmission, the functions of a nervous system are already inherent, even in uni-cellulars, where metabolic gradients, phase differences, differential permeability, etc. serve as fore-runners of a nervous system, with no marked break. The presence in all animals of a chemical transmission system in terms of the output of the ducted and ductless glands, chemical activators in the blood such as secretin, and the production of large molecules by other large molecules, such as antibodies by gamma-globulin, or proteins by DNA-RNA, illustrates that there are many other channels of transmission of information besides a nervous system.

A further argument suggests that an autonomic system and its control of the inside environment was antecedent to a central nervous system, and leads to the conclusion that the reference frame of a common outside environment came into existence only through the development of distance receptors, a central nervous system, and a language, which language maps the environment and must map the ‘inside’ environment as a projection ‘outside’.

Our hypothesis then is that the antidote to over-emphasis of a central nervous system is to go far down in the evolutionary scale and, forgetting about emergent properties for the time being, invent a description of how behaviour can be interpreted at the level of organisms with no recognisable nerve cells.

To start at the so-called beginning - there is the problem of the linking of ‘small’ molecules to make large molecules, which large molecules have the ability to replicate themselves. As these molecules become larger, ‘shape’ seems to become more important than the specific character of particular groups. In other words, with increasing size, as long as the shape is preserved, some groups can be replaced without altering the overall function. As Penrose (1960) has pointed out, the parsimony of most natural processes suggests that it is uneconomic to have special templates for replication, as in industrial manufacturing processes, because replication by cellular units would then also have to provide for replication of such templates. This conflicts with the Watson-Crick hypothesis of DNA replication. A further objection to the Watson-Crick hypothesis is that there is no provision for cut-off. If we say that such polymers, according to their number of units, contain a specified amount of information, and thus can be put into one-one correspondence with the cardinal numbers, it is necessary to describe a mechanism whereby replication will limit this size. Such a replicative mechanism by alignment or contact would involve, for example, a nucleotide of the old chain having a backward reaction on the previous nucleotide of the aligned or new chain. Then, with the non-existent n-plus-one nucleotide of the old chain, there would be no back reaction on the previous nucleotide of the new chain, and there would be a cut-off and perhaps release of the whole new chain. Because of this back reaction, alignment in the form of a helix would be the most probable configuration and would determine the left- or right-handedness of the helix. No template would be needed if the chains are assumed to be amphoteric (as the proteins are). In other words, they are their own templates, and a special extra template is superfluous.
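As a purely illustrative toy (not part of the argument above, and with an invented pairing rule and chain), the cut-off mechanism can be written out as a short procedure: each unit added to the growing copy is confirmed by a back reaction from the next unit of the old chain, and the absence of an (n+1)-th old unit terminates and releases the copy at length n.

```python
# Toy rendering of replication with cut-off by back reaction; the alphabet,
# the pairing rule and the template are invented for the illustration.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(old_chain):
    new_chain = []
    for i, unit in enumerate(old_chain):
        new_chain.append(PAIR[unit])                # align a complementary unit
        back_reaction = i + 1 < len(old_chain)      # back reaction from old unit i+1 on new unit i
        if not back_reaction:                       # no (n+1)-th old unit: cut-off,
            break                                   # the whole new chain is released
    return "".join(new_chain)

template = "ATGGCATTC"
copy_chain = replicate(template)
print(template, "->", copy_chain, f"(released at length {len(copy_chain)})")
```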

Going beyond the molecular stage, we consider ‘equi-finality’, ‘steady-state theory’ and the ‘directive correlation’ of Sommerhoff (1950, 1961, 1962).

Equi-finality, or the achieving of the same final state by starting from a variety of different states, is illustrated by many experiments in the interference of morphogenesis. In line with the open systems concept of Von Bertalanffy (1952) and others, equi-finality can be well explained by the description of a steady state. One example would be the limitation of growth as due to a balance between a surface-controlled anabolism and a volume-controlled catabolism, so that, no matter with what size of a particular type of organism one started, the same final size would be achieved. In such open systems the determining process would be a metabolic balance.
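This particular example can be made numerical. The sketch below (Python; the rate constants are arbitrary, and the 2/3-power form is only the usual way of writing a surface-controlled gain against a volume-controlled loss, not something taken from the paper) integrates dV/dt = aV^(2/3) - bV from very different starting sizes and shows them converging on the same final volume (a/b)^3.

```python
# Equi-finality illustrated: surface-controlled anabolism minus volume-controlled
# catabolism drives every starting size to the same steady-state volume.
A_RATE, B_RATE, DT, STEPS = 1.0, 0.5, 0.01, 20_000

def grow(v):
    for _ in range(STEPS):
        v += DT * (A_RATE * v ** (2.0 / 3.0) - B_RATE * v)  # forward-Euler step
    return v

print("predicted final volume:", (A_RATE / B_RATE) ** 3)
for v_start in (0.1, 1.0, 50.0):
    print(f"start {v_start:6.2f} -> final {grow(v_start):6.3f}")
```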

In the ‘directive correlation’ of Sommerhoff, we come across something really rather difficult. In his schema, drawn below, there is an attempt to show that many of the characteristics of the living can be proved to be physical system properties. He does this by enlarging his treatment to include time explicitly, and by insisting that the relations involved are at the minimum tetradic, and cannot be broken down into sets of dyadic relations.

Let me try to explain the schema. This is a generalised feedback description, but the feedback loop cannot be indicated, as the variables are functions of the time and the feedback is a property of the tetradic relation.

[Schema of directive correlation. C: coenetic variable; E: environmental circumstances; R: response function; G: goal or focal condition; t: time (t0, t1, t2).]

E and R are epistemically independent variables, that is, they are independent at the instant t1. They are both functions of the coenetic variable C, and G is a function of E and R, i.e. E and R mutually compensate, so that G remains constant. There is an invariance relation between G and the set of virtual relations of the coenetic variable C. If the feedback could be said to be anywhere, it could be said to be more in the lines CE and CR, which have a vague similarity to afferents.
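A toy numerical rendering of the schema may help (all functional forms here are invented for the illustration): the coenetic variable C varies from trial to trial; E and R are each determined by C, with no direct link between them at t1; and because R compensates E through their common dependence on C, the focal condition G stays at its goal value over the whole range of C.

```python
import random

# Toy directive correlation under invented functional forms: C at t0 determines
# both E and R at t1, and G = E - R at t2 remains at the goal value for every C.
random.seed(0)

def environment(c):            # E as a function of the coenetic variable (assumed form)
    return 2.0 * c + 1.0

def response(c):               # R as a function of the same coenetic variable (assumed form)
    return 2.0 * c - 4.0

for _ in range(5):
    c = random.uniform(-10.0, 10.0)          # coenetic variable at t0
    e, r = environment(c), response(c)       # E and R at t1
    g = e - r                                # focal condition at t2
    print(f"C = {c:7.2f}   E = {e:7.2f}   R = {r:7.2f}   G = {g:4.1f}")
```

Breaking the common dependence (say, letting R ignore C) makes G vary with C, which is the sense in which the relation is irreducibly four-termed: C, E, R and G all enter.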

The back-reference period, that is t1 - t0, can be very short, as in the lag of many servo-mechanisms, or it can be extended to include descriptions of such biological phenomena as ‘learning’, ‘memory’, and ‘phylogenetic adaptation’.

The utility of this schema is not obvious at first glance, but two important points can be made.

(1) By treating goal-directed activities, both in mechanical devices such as range finders, and in organisms, under the same description, Sommerhoff disposes of the idea that vital organisation differs from a physical system property, and he gets to the heart of such vital organisation by defining that which makes vital organisation different from other types of similar organisation, such as equilibrium seeking and some feedback control systems. The main difference is that in such similar systems the middle variables E and R are dependent at time t1.

(2) The second part is the insistence that the usual dyadic statements, as ‘E is adapted to B’, are ineffective, because they leave out the back reference period and variable C . The relation is tetradic, and cannot be treated as dyadic.

There are many who would quarrel with this, and say, as Wiener proved, that binary notation is the most economic for computers, that the weight of the authority of Aristotle and the use of dyadic thinking, such as ‘black or white’, the excluded middle, etc., has proved its efficacy over many centuries, and that any higher relations can be handled by functions of functions, or by multi-valued logics.

We do not agree. There is a crying need for a true calculus of multiple relations, starting with the triadic. Charles Peirce, in the middle of the last century, was one of the first to see this, but such a calculus is as yet undiscovered. We believe that until the need for it is explicitly pointed out and widely publicised, such a multiple relational calculus will not be invented.

To represent dyadic relations we have the simple line from one element to the other: o--o.


In the triadic case a triple line would be needed, giving the possibility of tremendously greater linkage alternatives.

To the sort of people who say: ‘We must get on with the job. The achievements of science are relentlessly pushing back the frontiers of knowledge’, we would answer that, rather than getting on with the job, it might be better to pause and examine what job it is that we are getting on with. If we emphasise what has not been achieved, against what has, we might answer that it is almost degrading that as human beings we live our lives and die, without even a glimmer of what it is all about, any more than the house cat, resting near the television set, has any knowledge of its works.

If we look at what Bridgman (1959) calls ‘The Way Things Are’, and he cites the obvious dichotomy between the multiple private worlds of individual sensation and the public world with its assumed unity, there are so many anomalies that to make any competent description it would be necessary to start almost from scratch and devise a new approach which would include and explain these. For instance, what is the relation between the so-called ‘distal’ stimulus, or its image on the retina, and its representation in the field of vision? Can we assume anything but that the top of the room, say, is only recognised as ‘the top’ because cells in the lower part of the retina have a life of their own and recognise the topness of the room in the same way that touch cells on the leg, when stimulated, recognise that it is that part of the leg? This implies a differential ability in every receptor, or at the minimum, a grid of receptors. This is not taken account of at all in present theory, unless one assumes a hypothetical sensing matrix in the skull which takes in the information and then projects it outward again on to the wall of the room or the part of the leg stimulated. This is literally far-fetched, and no such mechanism has yet been chased down.

Again, mental images might be explained if there were discovered an excitatory efferent innervation of sense receptors. There is some evidence for this, as cited by George (1961), but this would not explain the phantastic combinations and re-combinations of such mental images, as in imaginative planning or in dreams, and the unaccountable shifts from ‘outside reality’, as when ‘the lake looks blue’, to ‘inside’, when we say ‘the knife causes pain’. There seems to be back of all this a tremendously powerful and essential phantasy generator based on an organism-environment non-dichotomy. There is here a danger to modern science, as the power of this generator can mount evidence for almost any belief. As Richelieu said: ‘Give me six lines from the writings of the most honest man, and I will find evidence therein to hang him’.

In Artorga Communication No. 38 (February, 1962) we proposed a set P of Particulate Observers (initially human). The behaviour of any such observer, Pi, is described as state-determined, i.e., the set of variables describing the behaviour is a single-valued function only of the previous state and of the particular time interval. Taking two observers, say Pi and Pj, we add an observation process, such that Pi observes the states of Pj and Pj the states of Pi. The ‘machinery’ and storage of such observation does not concern us here. It could be similar to translation and reverse translation, or coding and decoding. The assumption is that such observations resulted in the recognition of similarities, or in the weak case regularities, in the behaviour of Pj as observed by Pi, and vice versa. The regularities of Pj as observed by Pi, and the regularities of Pi as observed by Pj, form a set, and their logical intersection, that is, the regularities that they both have in common, can be called a language.

If we now consider N particulate observers (P = 1, 2, ..., n), the intersection of regularities constitutes what we called in Communication No. 38 Language 1 (Everyday English). This is brought in to show the history and development of a language. Now we assume that this language, L.1, is a set, statements in which act as parameters to the behaviour of the set P. (This set P should be termed Pe, an existent set, in contradistinction to the set Ph, or historical set, in which the language originated. But to avoid subscripts we will speak hereafter only of set P, meaning the existent set.)

This Language 1 then is a great ‘teaching machine’. So, using an algebraical law of external composition, we can say, for observer Pi, that L.1 × Pi is mapped into Pi. In tabular form, all couples (L.1, Pi) are listed in the top row, all states of Pi are listed in the column on the left; their conjunctions give the states or results of the mapping.
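A minimal rendering of such a composition table, with the statements, states and transitions invented purely for illustration, would be:

```python
# Toy external composition: a statement of L.1 acting on a state of an observer Pi
# yields a new state of Pi.  All names and transitions are invented for the example.
STATEMENTS = ["'it is raining'", "'the kettle boils'"]
STATES = ["indoors", "outdoors"]

# (statement, current state) -> next state of Pi
COMPOSITION = {
    ("'it is raining'", "outdoors"): "indoors",
    ("'it is raining'", "indoors"): "indoors",
    ("'the kettle boils'", "indoors"): "indoors",
    ("'the kettle boils'", "outdoors"): "outdoors",
}

# Print the table: statements across the top row, states down the left column.
print(" " * 12 + "  ".join(f"{s:20s}" for s in STATEMENTS))
for state in STATES:
    row = "  ".join(f"{COMPOSITION[(stmt, state)]:20s}" for stmt in STATEMENTS)
    print(f"{state:12s}{row}")
```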

We next assume that statements in L.1 (Everyday English), acting as parameters to the P’s, appear to make sense to these P’s. In reality, due to its evolution as the intersection of the regularities observed by the P’s, statements in this language have no meaning, except in the context of such observed regularities.

To derive larger meanings out of statements in L.1 we have to posit an outside or semi-omniscient observer, D. This observer is at present undefined, but You the Reader can be said in L.1 to act the role of this observer. It is possible that there will be no unchanging definition of D. As an unbounded, changing set it might be classed with the cognitive or evolutionary systems described by A. G. Pask. As such its elements may be said to form their own languages and to alter their reference frame. The elements of it will try to make sense out of statements in L.1, using a language L.2 (Bourbaki Set Theory), and as L.1 was formed by the mutual observations of regularities by the set Ph, the elements of the set D, by the observation of regularities in itself, may form a language Ls (Self-Language), in which it can ‘make sense’ within its own system.

It may seem strange that no mention has been made so far of an ‘environment’. This has been purposely left out as confusing the issue. We are dealing here with a paper machine, with You as the Departiculate Observer D, trying to make sense out of the entire system of L.1 x Pi and Pi. If in any sense there is an environment at all, this is it.

From the angel’s, or semi-omniscient observer’s, angle, what is termed in L.1 ‘environment’ is only another language which has played its part in the evolution of L.1 by the Ph’s. In this sense, all ‘objects’, visual and auditory stimuli such as television programmes, all skills, etc., can be called languages. There can be unrecordable languages of neurosis, conspiracy, or culture. There are two-person and many-person languages. What we are looking for might be termed a one-person language. So, as a speculation or thought experiment, we will rule out any ‘common environment’ as a reference frame (except as it has played its part in the evolution of L.1, Everyday English, and here, of course, it never comes in explicitly but only through statements made in L.1).

We are thus trying to invent a calculus, disregarding the goal of consistency, in which statements can be made which have some meaning to D (You the Reader), possibly by using L.2 (Bourbaki Set Theory) as a meta-language to describe L.1, and obviously using L.1 as a meta-language to describe D. D, then, is an envelope in which the reader can put his own meanings, and is thus somewhat akin to a propositional calculus, in which the variables can be given values and the statements ‘meaning’.

D can be envisaged as of unbounded size and complexity. Our attempts to interact with it may or may not produce regularities which can help us to evolve a language in the sense heretofore described. To interact with it we will build a series of models, and our criteria of success of interaction can only be that we sense some intuitive ‘reality’ or correspondence between the particular model and what we think of as D. In this way it is completely solipsistic. An example of such a model would be the Topological Psychology of Lewin. However, this particular model, with its attempts at a feeling space and reward and punishment vectors, suffered, as Ashby pointed out, from a lack of proper mathematical descriptive machinery.

Numerous nerve-net models based on the assumption in L.1 that a ‘common’ outside environment is the only possible reference frame require matching machinery between such an environment and the nerve-net. We think this is an artefact due to this assumption. In a similar way, there is a ‘language’ of scientific experiments. And the protocols of these experiments require that only certain well-defined procedures be used. These can also be said in L.1 to be artefacts due to the assumptions involved. For instance, if we select a system in which sets of pigeons are ‘taught’ to peck at an illuminated figure, and with no reinforcement immediately peck at the same figure eighteen months later, we could say that learning and memory were obviously demonstrated. However, by including the experimental technique as ‘part’ of our description, we could also say that no memory was stored, since the presentation of the illuminated figure was necessary for the recall, and that it would be quibbling to say that the memory was not stored in samples of this figure. Also, one could say that no learning had taken place, since the ‘goal’ of obtaining grain was an objective system property of the pigeon-experimental-set-up mix, and this goal would be achieved in such a mix with no change necessary in the pigeon. You can send a package to its destination by air, rail or sea, with no essential change in the contents.

There are many other models which could be thought of to objectify D in a changing series. The utility of such would be that they would give for the first time to D (You, the Reader) an objective model of what You are really like, so that you could understand and communicate with yourself, and in this sense be your own psychiatrist.

The model we fancy at the moment is a type of phantasy generator mentioned in previous Artorga communications. In this model we posit a non-localised phantasy generator in which, using L.1, all ‘experience’ is mapped into phantasy, and phantasy is continually mapped into ‘experience’. There is then in L.1 no essential difference between sleeping and waking. Also, if we accept the assumption of ‘common environment’ as an artefact of L.1, we have a source of tremendous variety. Thorpe stated in an address before the British Philosophy of Science Group in the fall of 1961 that he did not think there was enough genetic variety to support the well-documented thesis that the song of a certain type of warbler was not learned but genetically determined. On the model proposed here, ‘looking at genes’ is part of a larger system, which system has the necessary variety. There is in L.1 a sense in which we can say that if we look at genes or at a stained slide of the cerebellum or at a view of the Roman Forum, we are still looking only at our own brains. In this sense there is enormous variety inherent in a squash court with two players, two bats and a ball, and even more variety, from the point of view of a phantasy generator model, in a bare canvas, an artist’s brush and a palette.

S U M M A R Y

An over-emphasis has been placed on the nervous system, perhaps because the brain appears to be the only organ with enough structure to be the depository of the complex stores and centres needed to account for the corresponding complexities of behaviour. We believe that such stores and centres can be better accounted for by assuming an environment-organism mix. We suggest the employment of a black box type of super-observer linked with a well-defined language, like Bourbaki Set Theory, in which our ordinary language ‘Everyday English’, called Language 1, can be discussed, and which, in turn, is mapped into a set of particulate observers P (L.1 x Pi into Pi).

Or it might be fruitful to consider the antigen-antibody relation; the 10,000 items that Burnet (1956) suggests gamma-globulin ‘doesn’t know’, in other words, has learned to react against and to recognise as ‘not self’. The description of such activity would at least lead us to a possible conclusion that the ‘directively correlated’ activity of such large molecules would be comparable, not only with what is usually called ‘sensation’, but with ‘sight’ sensation, for instance as with mental images when the eyes are closed. Whether such mental images are similar to ‘vacuum activity’ or not, in the sense of the Lorenz-Tinbergen school, there is, as Russell (1961) points out, an analogy between the experimental ‘imprinting’ of animals shortly after birth and the experimental implantation of tolerance, usually in embryo.

R E F E R E N C E S

BRIDGMAN, P. W., (1959); The Way Things Are. London, Harvard University Press.
BURNET, SIR MACFARLANE, (1956); Enzyme, Antigen and Virus. London, Cambridge University Press.
GEORGE, F. H., (1961); The Brain as a Computer. London, New York, Pergamon Press.
PENROSE, L. S., (1960); Developments in the Theory of Self-Replication. London, Penguin Books, New Biology No. 31.
RUSSELL, W. M. S., (1961); Evolutionary Concepts in Behavioral Science. Ann Arbor, Mich., General Systems Year Book No. VI.
SOMMERHOFF, G., (1950); Analytical Biology. London, Oxford University Press.
SOMMERHOFF, G., (1961); The anatomy of vital organisation. Artorga Communication, 36.
SOMMERHOFF, G., (1962); The Abstract Characteristics of Living Systems. Privately printed.
VON BERTALANFFY, L., (1952); Problems of Life. London, Watts and Co.


Epilogue

N. W I E N E R

Massachusetts Institute of Technology, Cambridge, Mass. (U.S.A.)

I want to discuss a specific problem in cybernetic medicine. I am not going into the general discussion of the cybernetics of the nervous system. It is a tremendous problem which has been discussed by many others in this volume.

As a matter of fact, to some extent I am going to take the brain for granted in what I am going to discuss. I wish to discuss the manner in which impulses come into the brain and are taken out of the brain to action, the feedback loop that goes to sensation, motion, kinesthetic and other responses to the motion as sensation, and so around and around again. There are many methods for investigating this. This is a conditioned-reflex problem, and a great deal can be done either by observation or by animal experiments. Nevertheless there are some important tools which we don’t think of as being generally available for this problem, which I feel can contribute to a more useful and detailed understanding of it.

I had the idea years ago that the prosthesis and rehabilitation problem, the problem of restoring amputees to function, as well as the problem of restoring to function people who have lost one or more of their senses, was interesting not only as a humanitarian task. It is also interesting as a method of exploration in which we are directly faced by the need of not doing merely simple elementary tasks with the neuromuscular sensory reflex loop, but of restoring a damaged apparatus as nearly as possible to normal function. It would be ideal if we could take a nerve that has been cut, put an electrode into the end of each fibre, put in impulses from the outside, take out motor impulses and restore the damaged apparatus that way. I don’t want to say that it never will be done; I do want to say that we are very far from doing it in a stable, viable way. Nerve tissue is delicate tissue, and damaged nerve tissue is almost always irreparable. Regeneration is a thing which we only understand to a limited extent. However, there are other ways of repairing damaged reflex arcs, say in an amputated hand.

The muscles which move the hand, except for a few muscles which spread the hand or contract it sideways, and have relatively small importance, are mostly in the fore-arm. An amputated muscle is still a muscle with its full innervation and even its full complement of kinesthetic sense-organs. It is a muscle, however, which cannot act directly. I had the idea several years ago to take an amputated muscle, pick up its action potential, amplify this up and make a motion out of it. I tried to get something done with my idea, but it was very difficult. The English were working on the same idea at the time. Recently the Russians have followed this idea up with a considerable amount of success. I believe that they admit that the idea is the same one which I introduced.

I was at a meeting in Moscow a year ago last summer, in which they discussed an artificial hand which worked in this way. In this artificial hand they have two electrodes, one for the extensor muscles of the fore-arm, one for the flexor muscles of the fore-arm. They get action potentials out of these. These action potentials are amplified up and move motors in an artificial hand. The power for the motors comes in from batteries carried in a belt around the hips. These batteries, they say, are good enough for about fourteen hours of work before recharging. They have found quite a high degree of success in their work. I will admit that the hand they move is more or less of a claw. The fingers are separate but the fineness of motion is not great. You can’t get too much fineness of motion from two electrodes. I will admit also that the problem of getting sensations back from the artificial hand to the amputated limb is one which, while they realize fully its importance, they either have not solved or have not gone very far in solving.
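As a rough, modern illustration of the scheme just described, in which the amplified activity of the flexor and extensor groups drives the motors of the artificial hand, the sketch below rectifies and smooths two recorded channels and maps their difference to a single motor command. It is only a sketch in Python under assumed numbers: the sampling, window length and dead band are illustrative choices, not details of the Russian hand or of the project discussed here.

```python
# Illustrative sketch (not the original hardware): two EMG channels, flexor
# and extensor, are rectified and smoothed, and the difference between their
# envelopes becomes a normalised command for the hand motor.
import numpy as np

def emg_envelope(emg, window=200):
    """Full-wave rectify the raw EMG and smooth it with a moving average."""
    kernel = np.ones(window) / window
    return np.convolve(np.abs(emg), kernel, mode="same")

def motor_command(flexor_emg, extensor_emg, dead_band=0.05):
    """Map flexor/extensor activity to a motor command in [-1, 1]."""
    drive = emg_envelope(flexor_emg) - emg_envelope(extensor_emg)
    peak = max(np.max(np.abs(drive)), 1e-9)   # avoid division by zero at rest
    drive = drive / peak                      # normalise to the strongest effort
    drive[np.abs(drive) < dead_band] = 0.0    # ignore resting-level noise
    return drive                              # >0 closes the hand, <0 opens it

# Synthetic signals standing in for the recorded action potentials.
t = np.linspace(0.0, 1.0, 2000)
flexor = 0.5 * np.random.randn(2000) * (t > 0.5)    # effort in the second half
extensor = 0.5 * np.random.randn(2000) * (t < 0.5)  # effort in the first half
command = motor_command(flexor, extensor)
```

The point that survives even in this small form is the one made above: only the signal of the muscle is used; the power comes from the batteries.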

Last fall I had the misfortune to have an accident. I fell down a staircase and broke my hip and my fore-arm, and naturally I was at the hospital under treatment; and naturally I was in contact with orthopedic surgeons. Also naturally, I could not refrain from talking with them and presenting my hobbies, and I mentioned this idea about artificial limbs. At that time they had just had one of their people over in Russia; he had been commissioned by our State Department to find out one thing and one thing only: what the Russians were doing with the artificial limb. I will admit that there were a good many details that he did not get for one reason or another, but he came back with the conviction that the work was successful, the ideas were sound, and that they were able to return some people again to work with these artificial limbs. Because of the fact that the nervous impulses were more or less like the normal nervous impulses, it was very easy to learn to use these limbs, for the same muscles were used even though their contraction was used not to pull something but to signal.

We started in my sick-room discussing plans for the future, and we built up a team of orthopedic surgeons, of neurologists and of engineers to carry this problem further, and we have had the good luck to secure the service and the financial backing of the Liberty Mutual Insurance Company. This organization is very much interested in rehabilitation problems.

In handling prosthesis from this point of view we run into a lot of new problems. In an amputation we cannot consider a loss complete until not only the power has gone but the signal has gone as well. It is a commonplace of modern biological technique to take weak signals and build them up, and a signal of the order of millivolts or less coming from the action potential of a muscle can actuate (through a battery) a motor of a considerable amount of power. So one of the main points in our work is the shift of emphasis from the loss of power to the loss of signal and to the ability to replace the signal. This means more than it may seem. For example, one of the things that is bothering us with what the Russians have done, and I think it is bothering the Russians, is that two electrodes, one for the flexion, one for the extension, are not enough to actuate an arm well. In order to do a good job, we should get impulses from each muscle or even from slips of fibres of each muscle separately, which means in the long run implanting electrodes. Now we encounter all sorts of surgical questions. Is there any way of taking the wire from an implanted electrode through the skin without damage, without infection? It is not an easy problem, but the success that people have met in taking electrodes from the brain through the skull, say in cats in which such electrodes are implanted, suggests that it is not an insuperable problem. We have various ideas about how to do it. It may be necessary to take a slip of bone, fix it under the skin, draw a small hole through it, and use that to prevent the fine wire from tearing the skin. Some of our doctors have made a very good suggestion that we may use either electrostatic or electromagnetic induction across the skin, or the two combined as in wireless, to carry messages out. At any rate the multiplication of messages is an essential part of a really optimum prosthesis. If a man is ever to play a piano well with an artificial hand, which I admit is a rather advanced goal, it will have to be through the use of implanted electrodes.

Next, one of the greatest losses in a hand-amputation is sensory. The feedback from the action of the hand cannot adequately be made. Now there are several possibilities here. First let us take the sense of pressure. It is perfectly possible to make pressure gages in the artificial fingers, which will give an electrical impulse when there is pressure. These can be made to work various receptor organs on the skin, which will transfer the impulses to the nervous system. I suggest vibratory apparatus to avoid the accommodation problem, because accommodation is less for vibration than for ordinary pressure. That is going to be a difficult problem. It is quite conceivable that we may find ways of getting more directly to the nervous system than we realize, but my guess at present is that the best bet is a vibration on the skin. This involves a whole lot of details. Before we begin this, we shall have to study electromyograms of mutilated muscles, of which people know very little*. We shall have to study sensitivity of the skin to vibration and the differential sensitivity of different organs. You see there is an enormous physiological field.

* I have found much more work in this than I suspected.
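
A small sketch, under assumed numbers, of the pressure-to-vibration substitution suggested above: a gauge in the artificial fingertip modulates a vibrator on the skin, vibration being preferred to steady pressure because it adapts less. The force range and the frequency band below are illustrative assumptions and are not taken from the text.

```python
# Illustrative sketch: fingertip pressure modulates a vibrator on the skin.
import math

def vibration_drive(pressure_n, max_pressure_n=20.0,
                    f_min_hz=50.0, f_max_hz=250.0):
    """Map fingertip pressure (newtons) to vibrator amplitude and frequency."""
    level = max(0.0, min(pressure_n / max_pressure_n, 1.0))  # clamp to [0, 1]
    amplitude = level                            # firmer grip, stronger buzz
    frequency_hz = f_min_hz + level * (f_max_hz - f_min_hz)
    return amplitude, frequency_hz

def vibrator_waveform(pressure_n, t_s):
    """Instantaneous drive for the skin vibrator at time t_s (seconds)."""
    amplitude, frequency_hz = vibration_drive(pressure_n)
    return amplitude * math.sin(2.0 * math.pi * frequency_hz * t_s)
```

Coding force into both amplitude and frequency is one way of working around the accommodation problem mentioned above, since the skin keeps responding to a changing stimulus long after a constant one has faded.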

Now comes the kinesthetic sense. For getting the most definite outputs for the electromyograms in muscles, it is best to use them under isometric contraction, where the length of the whole muscle does not change, but where we still have local contractions. However, if you do that you have made a profound alteration in the sensations, and especially the kinesthetic sensations you get from contraction. Dr. Braitenberg made a suggestion to me that we might find a way to make the muscle follow the contraction of the limb in its length, so that the muscle follows the artificial limb as an ordinary muscle follows the limb, so as to make the kinesthetic sensations as nearly as possible normal. I have been thinking over that problem.

Here is a wild idea. I don't know whether it will work. Suppose we took a long muscle and denervated the distal part of it; the proximal part would act normally and the sense-organs would act normally according to the length. The distal part would be stimulated in one way or another, electrically, according to the position of the artificial limb. That might give a way to keep the tension of the proximal part about what it would be in the same position. I don’t say it will work. I want to show you the sort of considerations that we need to take into our work.

You see that this whole philosophy of signals is new in the matter of prosthesis. It will be necessary in this sort of prosthetic work for the amputation and the conditioning of the implantation electrodes and so on, all to be done surgically with a view to the possibilities of this method and not to the previously existing artificial limbs.

Let us now take up the matter of the leg. Here we run into some very interesting problems of, I think, a fundamental nature, a cybernetic nature. They are concerned with walking. The hand is almost exclusively used for purely voluntary action: that is, an action where each act is done separately. There are, however, actions in which this is not the case. Obviously a drummer has a periodic action that has got to be kept in phase, and a piano player also has this kind of an action, but I don’t think we are anywhere near the point of making artificial hands for drummers and piano players. On the other hand, if we make artificial legs we must make them for walkers, and walking is an action where the end of one action starts the next, so that we do not have a purely voluntary action. This means then that the take-off for walking must be different from the take-off for stepping up, stepping down, getting out of a chair and so on, which are true voluntary actions. It has got to be patterned for a continual process. It has been found that in tendon grafting after a paralysis, to replace a muscle put out of action in the leg, it is very important that the tendon be grafted to a new muscle which has more or less the right phase in the walking act. Otherwise the person, whatever he is able to do voluntarily, will do the wrong thing the moment he starts walking, and give it up. I suggest a double take-off for the walking action from the sound leg, making the extensors of one leg work the flexors of the other, with probably a certain amount of delay. This is not enough, because if we did this alone the man could not stop walking. Therefore we shall have to have a voluntary action take-off, which is not of a very high complication and can well be taken, even at and above the amputation, from the stump muscles of the amputated leg. Thus we have the problem of combining these two different take-offs. You see that we have here a beautiful lot of cybernetic problems, which makes it practically necessary to study in detail things that previously could be studied in a casual way in the laboratory. Therefore I think that this work will add a great deal to neuromuscular cybernetics, even though we are doing it with a particular humanitarian purpose. There is no way of learning how to make a watch quite like repairing one. We are trying to repair a damaged system, and in this we begin to realize the full complication of the system.
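A minimal sketch, under assumed numbers, of the double take-off suggested in the preceding paragraph: the extensor signal of the sound leg, delayed by a fixed number of samples standing in for the phase lag of the stride, drives the flexors of the artificial leg, while a separate voluntary channel from the stump muscles can start, stop or override the cycle. The delay, the threshold and the blending rule are illustrative assumptions, not a design.

```python
# Illustrative sketch of a two-channel ("double take-off") leg controller.
from collections import deque

class DoubleTakeoffController:
    def __init__(self, delay_samples=30, voluntary_threshold=0.2):
        # Fixed-length buffer realising the phase delay between the two legs.
        self.delay_line = deque([0.0] * delay_samples, maxlen=delay_samples)
        self.voluntary_threshold = voluntary_threshold
        self.walking = False

    def step(self, sound_leg_extensor, stump_voluntary):
        """Return the flexor drive for the artificial leg for one time step."""
        # Voluntary channel: a strong stump signal starts or stops the walking
        # cycle and always overrides the automatic channel.
        if abs(stump_voluntary) > self.voluntary_threshold:
            self.walking = stump_voluntary > 0.0
            return stump_voluntary

        # Automatic channel: the sound leg's extensor activity, delayed by
        # roughly half a stride, becomes the artificial leg's flexor drive.
        delayed = self.delay_line[0]
        self.delay_line.append(sound_leg_extensor)
        return delayed if self.walking else 0.0
```

The structure reproduces the point made in the text: walking is driven by the previous step, while starting and stopping remain voluntary acts.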

In our research group, I may say, we found we can all talk together and that we can all use the same language. There has been no difficulty between the engineering group and the physiological-surgical group in understanding the problems. Now we are definitely going to work in a way so that if any patents are taken out it will merely be to put them in good hands for manufacturing, not to make any profit. By this I mean that we are definitely against having our project tied down with any profit-making. We all share the same views on this, including the Insurance Company.


In addition we will have no man on the project who does not make any attempt to understand both the medical and the engineering side. That is, we are in the most complete opposition to the need-to-know attitude, which prevails in a great many industries. We won’t touch a man who doesn’t try to understand what the other fellow is doing. The engineers at the beginning will be put on as observers on the physiological work. Our first job is learning what impulses are available from an amputated limb, and learning the possibilities of implanting electrodes, and learning the possibility of feedback from vibrators or other such feedbacks. Our first work is physiological, but the engineers will have to work later with the material. Therefore they should be on the job as apprentices to the physiologists at the beginning, to know precisely: (a) the signals they will have to work with; (b) just what is to be done, that is, what restoring of function means. They need to get both of these things from the physiologist and the neurologist. On the other hand, when they begin the design, the situation will be reversed. Then we shall regard the physiologist and neurologist as at the disposal of the engineers, to indicate to them various phenomena that we may not have thought of at the beginning but which are all-important for the engineering function.

We believe we are working with a well-integrated group. I may say it is a very congenial group and it is a group of which I am very proud to be a member. I think this type of work is going to come more and more into cybernetics. This will be a part of the cybernetic medicine of the future, which will not merely consist of the cybernetic medicine of the laboratory, to find out things, but also of the cybernetic medicine of the engineer, to repair things. I think that we never would have learned the fundamental basis of electrical engineering, or at least not nearly as much as we have done, if we hadn’t had to make and repair apparatus by means of it. I think this is also true here.

I will admit that we are not touching here some of the main problems of cybernetic medicine, even in the neuromuscular field. We are taking the brain for granted. After all, the brain is a better apparatus than we can make to replace it, and even in a damaged peripheral system, it is reasonable to use the brain as much as possible.

We shall have to learn a great deal about the possibility of replacing incoming signals by different incoming signals, about learned reflexes.

This work will ultimately have to go hand in hand with what I consider, as all of you do, the deeper work about the neurophysiology of the brain itself. This is, I will admit, an immediate approach without the great deep problems which you have in the other field. I do not think, however, that it is a negligible approach to neuromuscular physiology. What I suggest is that in all of our work, not only in this field, the cybernetic medicine of therapeutics go hand in hand with the cybernetic medicine of observation. I think that we must be aware of the therapeutics as a most important tool of observation.


Author Index *

Albarde, P., 195 Alcaraz, M., 116 Allanson, J. T., 23 Altman, J. A., 116 Amassian, V. E., 13 Andrew, A. M., 63 Angyan, A. J., 178 Ashby, W. R., 23, 59, 63, 64, 66, 157, 179, 191, 222, 224, 225, 227, 229, 235, 236-241
Attneave, F., 80 Austin, G. A., 42 Autrum, H., 115 Avakyan, R. V., 116

Barbizet, J., 195 Barton, R., 232 Beer, S., 189, 199, 204 Bense, M., 79 Beurle, R. L., 23, 160, 179, 236 Blair, E. A., 12 Block, H. D., 23 Blum, G., 155 Bok, S. T., 87, 90, 160, 171, 178 Bonner, J. T., 191 Braines, S. N., 62, 63, 64, 66 Braitenberg, V., 160-176 Braithwaite, R. B., 203 Brazier, M. A. B., 115 Bridgman, P. W., 260 Broadbent, D. E., 178 Brock. L. G., 9 Brown, J., 178 Bruner, J. S., 42, 84 Buller, A. J., 13, 19 Burnet, Sir MacFarlane, 263 Bush, R. R., 178 Bykov, I., 53

Caianiello, E. R., 163, 189, 200 Cajal, S. R., 160, 164 Cameron, S. H., 179 Carnap, R., 180 Carreras, M., 117 Castells, C., 116 Chambers, W. W., 117 Chapman, B. L. M., 39, 40, 41, 42 Clark, J., 224-235 Clark, W. A., 165, 236

Clarke, A. D. B., 230 Clarke, A. M., 230 Cowan, J. D., 22-26 Cragg, B. G., 23 Crowder, N. A., 228

Danziger, L., 244, 246, 247, 248, 251 Del Castillo, J., 9 Desmedt, J. E., 116, 117

Elmergreen, G., 244, 246, 247, 248, 251 Erlanger, J., 12, 13, 19 Estes, W. K., 178 Eysenck, H. J., 178, 230

Fanjul, L., 116 Farley, B. G., 165, 236 Fatt, P., 9 Feigenbaum, E. A., 59 Fernández-Guardiola, A., 116 Finney, F. J., 13, 14 Frank, H., 79-96, 102, 103, 107 Frishkopf, L. S., 8, 19

Galambos, R., 116 Gasser, H., 13 Gelernter, H., 59, 64 George, F. H., 37-52, 178, 260 Gershuni, G. V., 116 Gjessing, R., 244, 247 Glaser, R., 227, 229 Goldman, S., 155, 158 Goodnow, J. J., 42 Görke, W., 112 Goubeau, I., 79 Greene, P. H., 180 Grimsdale, R. I., 64 Grossman, R. G., 9, 13, 19 Günther, G., 82, 183 Guzmán-Flores, C., 116

Hagiwara, S., 9, 15 Harmon, L. D., 9, 178, 181 Hayek, F. A., 40 Hebb, D. O., 39, 41, 44, 178 Hernández-Peón, R., 116 Heyer, A. W., 40 Hick, W. E., 221

* Italics indicate the pages on which the paper of the author in these proceedings is printed.


Hively, W., 230 Hofstätter, P. R., 82 Honerloh, H. J., 110, 111 Huant, E., 122-141 Hubel, D. H., 115 Hull, C. L., 178 Hyden, H., 178, 195 Hyman, R., 84

Infantellina, F., 55 Ivahnenko, A., 179

Jones, V. W., 155

Kappers, C. U. Ariens, 160 Katz, B., 9 Kazmierczak, H., 112 Keidel, U. O., 117 Keidel, W. D., 117 Kemeny, J. G., 228 Kilburn, T., 64 Klaus, G., 94 Kozhevnikov, V. A., 116 Kraft, H., 110, 111 Kreitman, N., 230

Lashley, K. S., 23 Latour, P. L., 30-36 Lauria, F., 162, 163 Lettvin, J. Y., 115, 178, 182 Levitt, M., 117 Lewis, B. N., 192 Licklider, J. C. R., 216 Liddell, E. G. T., 45 Lindsley, D. B., 116 Liu, C. N., 117 Livingston, R. B., 115 Luce, R. D., 80 Lumsdaine, A. A., 227, 229

MacKay, D. M., 179, 180, 192, 204 Maruseva, A. M., 116 Masterman, M., 183 Matthysse, S. W., 22 Maturana, H. R., 115, 182 Mayne, R., 221 McCarthy, J., 157 McCulloch, W. S., 22, 23, 115, 179, 180, 181, 182 Mechelse, K., 116 Merkel, J., 84 Miasnikov, A. L., 64 Miller, G. A., 84 Milner, P. M., 41, 44 Minsky, M. L., 180 Moles, A. A., 79, 88, 158 Morrissey, J., 230 Mosteller, F., 178 Müller, P., 97-113

Muroga, S., 24 Myers, R. E., 116

Nagel, E., 181 Napalkov, A. V., 59-69, 179 Nayrac, P., 215-223 Newell, A., 59, 81 Nigro, A., 53-58

Onesto, N., 162 Ordnance Corps, 237 Osgood, C. E., 40

Papert, D., 179 Pask, G., 39, 177-214, 224, 221, 228 Pauling, L., 22 Pavlov, I. P., 178 Paycha, F., 234 Pecher, C., 8, 9 Penfield, W., 45 Penrose, L. S., 258 Philipps, R. S., 221 Phillips, C. G., 45 Pitts, W., 23, 115, 181 Poletayer, J. A., 155 Polonsky, I., 195 Postman, L., 84 Pribram, K. H., 40 Price, W., 43 Pringle, J. W. S., 189, 194

Quastler, H., 22

Radionova, E. A., 116 Rapoport, A., 245 Rashevsky, N., 9, 179, 205, 244-254 Rasmussen, T., 45 Rohracher, H., 80, 82 Roldan, E., 116 Rosen, R., 186, 189, 205, 206, 211 Rosenblatt, F., 178 Rosenblith, W. A., 8 Roston, S., 245 Roy, A. E., 24 Rupert, A., 116 Russell, W. M. S., 263

Sainsbury, P., 230 Sauvan, J. L., 142-153 Schadé, J. P., 1-7 Schopenhauer, A., 81 Schreider, I. A., 66 Schwartzkopff, J., 89 Scrivener, J., 230 Selfridge, O. G., 180 Shannon, C. E., 24, 157 Sheatz, G. C., 116 Sherrington, C. S., 178


Sholl, D. A., 160, 172 Siegal, S., 178 Simon, H., 81 Skinner, B. F., 178, 228 Smith, M. J. A. A., 44, 47 Snell, J. L., 228 Sommerhoff, G., 258 Soroko, V. I., 116 Sperry, R. W., 118 Sprague, J. M., 117 Stanoulov, 154-159 Stark, L., 9 Steinbuch, K., 82, 83, 85, 86, 90, 98, 103, 107, 109
Stellar, E., 117 Stewart, D. J., 37, 46 Summer, F. H., 64 Svechinsky, V. B., 62, 64, 66

Temperley, H. N. V., 23 Ten Hoopen, M., 8-21 Thompson, G. L., 228 Towers, J., 230

Uttley, A. M., 23, 24, 25, 40, 160, 179

Verbeek, L. A. M., 24 Verveen, A. A., 8-21, 22 Viernstein, L. J., 9, 13, 19 Von Bertalanffy, L., 258 Von Foerster, H. M., 181, 182, 189, 236-241 Von Lube, F., 84 Von Neumann, J., 23, 47 Vota, U., 173, 174

Wagner, P. W., 112 Wailes, R., 224 Walker, C. C., 236-241 Walter, W. Grey, 178 Weiss, P., 22 Wells, O. D., 257-263 Wiener, N., 1-7, 23, 61, 81, 94, 185, 264-268 Wiesel, T. N., 115 Wigand, M. E., 117 Willis, G. D., 179 Winograd, S., 24

Young, J. Z., 115

Zeman, J., 70-78 Zopf, G. W., (jr.), 114-119


Subject Index

Acetylcholine, liberation, 217 yield, 195

Action potential, all or none nature, 26 all or none principle, 4 in nerve fiber, 8 propagation, 1

Activity, brain,

and motor control, 46 and sensory control, 46

convulsive, in complex epilepsy, 55 human, cerebral cortex as seat of, 154 mental, complex forms, 59 nervous,

complex forms, 59, 68 records of processes of man, 71

Adaptation, state, of brain stem reticular formation, 141 time, 129

Administration of pharmacological agents, 64

Afferences, on cortical level, 55 specific, 57

brain and, 59-69 of brain activity, analysis, 61 as computing procedure, 204 of self-organization, 61 of self-organizing systems, derangement, 66

of mental processes, 71

metabolism, 195

neural, of central nervous organization, 40

of algorithms of brain activity, 61 of behavior, 4 of cerebellar cortex, 7 of electroencephalograms, 75 frequency, 221 harmonic, of brain waves, 77 of networks with ideal neurons, 35 of structure of nervous networks, 63

accessory motor, of sensory systems, 6

Algorithm,

Allegories,

Amino acids,

Analogues,

Analysis,

Apparatus,

Application,

of adaptive machines, 235 of strychnine, on nerves, 8 of urethane, on nerves, 8

sensory cortical, 49 speech,

Area,

interaction with auditory stimuli, 49 interaction with visual stimuli, 49

visual, 49 Association,

centers, 152 Attention,

reaction, 82 and reticular formation, 117

Attitude, simulating behavioral, 95

Auditory nerve, activity, 116 response and reticular complex, 116

finite, and nervous system, 5, 37-52 redundant, 23 statistical theories, 27

of mental processes, 95

conduction velocity in, 175 myelinated, density, 173 pieces, 3

Automata,

Automation,

Axon,

Behavior, cybernetics of learning, 177-214 of human organisms, 140 human, simulation, 37 learning, models, 178 linguistic, 182 living, mechanical patterns, 222 models, 93 motivation, 61 sequential, of idealized neurons, 5, 36

Biocybernetics, definition, 1

Blindness, relative, period of, 36

Blood pressure, level, regulation, 66 rise, with regulating mechanisms, 65

activity, Brain,

analysis of algorithms of, 61


complex forms, 68 electrical, 122-141 motor control, 46 sensory control, 46

capacity, 39 electrical stimulation, 54 electronic, and computers, 57 fields, formation of excitation, 74 functional fields in, 158 human, functioning, 39 information, and wave processes, 75 information processes of, 59-69, 70-78 learning matrix, 97-113 models, artificial mathematic, 78 physiological processes, 72 preservation of information, 155 reception of information, 155 retention of information, 155 transformation of information, 155 transmission of information, 77, 155 waves, encephalographic record, 71

harmonic analysis of, 77 Brain stem,

reticular formation, adaptation state, 141

Capacity,

Cell bodies, of brain, 158

of glial cells, 3 of neurons, 3

of association, 152 of integration, 152

Central nervous system, development, 257 malfunctions, 244 model, 24

Cerebellar cortex, analysis, 7

Cerebellum, molecular layer, structure, 7, 162 neurons in, 122

conditioning in, 217 functional overlapping in, 45 mechanism, 53 as seat of intellectual human activity, 154 scheme, 5, 58 structural features, 154

optic, photic response at, 116

Centers,

Cerebral cortex,

Chiasm,

characteristic habituation, 116 Circuits,

reverberating, 54 in cortex, 167 in nervous system, 45

Classification, in human sensory system, 43

Cochlea,

Code, hair cells of, 120

binary, 100 genetic, of types of connection, 43 informational, 71

cybernetic models, 7

density, 173

reticular, and auditory nerve response, 116

digital, general purpose, 38, 50 and electronic brains, 57 redundant, 25

Concentration, RNA, 195

Conditioned reflex, formation of autonomous chains, 62 inhibition, 53-58 mechanisms, 5, 53-58 pathological systems, 66 treatment, 4

Conditioning, of nervous system, 2 and reticular formation, 117

slow decremental, in dendrites, 163 speed of, along nerve fibers, 38 velocity, 124

Cognition,

Collaterals,

Complex,

Computer,

Conduction,

in axons, 175

excitatory, 162

inhibitory, 41

synaptic, increase, 41 types of, genetic code, 42

Connectivity, of neurons, 179

Consciousness, degree, 40 information, 71 theory, 82

Construction, mathematical models of cortical structure, 25

Contacts, afferent, of neurons, 164 axo-dendritic, 164 efferent, of neurons, 164

of speech, 70

isometric, 266

motor, of brain activities, 46 sensory, of brain activities, 46

Connections,

histology, 162

histology, 162

Content,

Contraction,

Control,


Control (continuation)

Conversion.

Cortex,

voluntary, of noxious stimulus, 231

into probits, 14

cerebral, complex, 3 scheme, 5

limited regions of, equipotentiality, 23 reverberating circuits in, 167 visual, information from retina to, 39

randomness, 22

pathologic phenomena, 5

biocybernetics, definition, 1 medical cybernetics, definition, 1 neurocybernetics, definition, 1, 7

excitability, 9

viscosity, 253

Cortical areas,

Cybernetic explanation,

Cybernetics,

Cycles,

Cytoplasm,

Damage,

Decrement, of perceptrons, 23

in dendrites, 175 in synapses, 175

synaptic, 30, 31

decrement, 175 distribution, 167 excitation, decrementally conducted, 162 slow decremental conduction, 163

of collaterals, 173 of excitation, 165 of myelin preparations, 173 of myelinated axons, 173 of neurons, 164

neural, patterns, 19

of axonal terminations, 167 of dendrites, 167

memory and, 143 replication, hypothesis, 258

Delay,

Dendrites,

Density,

Discharge,

Distribution,

DNA,

Efficiency,

Electroencephalogram, inhibition, in conditioned stimuli, 55

α-rhythms, and sensory information, 6 decrease of amplitude, 130

analysis, 75 auditory stimuli, influence, 128

energy, 131 expression of brain writing, 73 manifestations of brain writing, 72 periodic patterns in, 36 topography, 129 visual stimuli, influence, 128

α-waves, 126, 127 β-waves, 127

Endocrine system, autoregulation, 137 cybernetical aspects, 138 homeostasis and, 139 kinetics, 244 malfunctions, 244 non-linear interactions, 144 oscillation, 138, 244

creation, 148 logic, 143 structure, 146

complex, convulsive activity, 55

of limited regions of cortex, 23

computational, 24 synaptic, 24

Excitability, cycle, 9 fluctuation, 5, 8-21

Excitation, in dendrites, decrementally conducted, 162 density, 165 distribution by retina, to visual cortex, 40 formation, in brain fields, 74 nerve fibers, 223 noise factor in, 8 of optic nerve, 74 of points of retina, 74 produced by neurons, 164 units of, threshold of neurons in, 30 visual, 126

Experience, effects, 43 motor and sensory, 40

movements, elicited by, 36, 40

Electroencephalography,

Engram,

Epilepsy,

Equipotentiality,

Error,

Extrapyramidal motor system,

Fatigue,

Features,

Feed back,

neural, 41

structural, of cerebral cortex, 154

channel, 157 compensatory, 140 description, 80 discriminatory, 115


internal, 132 negative, 222

A-, in sciatic frog nerve, 9 cortico-cortical, long, 41 efferent,

Fiber,

ending on receptor cells, 118 to sense-organs, 120

ending on cells, excitatory, 37 inhibitory, 37

excitatory, long, 41 inhibitory, local, 41 nerve, excitation, 223 output-, from Purkinje neurons, 161

dendritic, 164 volume, 165

functional, in brain, 158

synchronization, 46

and local potential, 9 of membrane potential, 19 of peripheral nerve fibers, 22

neuronal, 5, 8-21

characteristics of nervous system, 38

of neural networks, 181

of human brain, 39

Fields,

Firing,

Fluctuation,

Fluctuation in excitability,

Fractionation,

Framework,

Functioning,

Ganglion,

Gaussian distribution function, 5, 8 Glial cells,

bodies of, 3 Golgi staining process, 3 Grey substance,

structures, 6 Growth,

learning during, 47

cell, in retina, 120

Habituation, of photic response at optic chiasm, 116 and reticular formation, 117

and EEG, 132 by Mescaline, 129 structure, 133 visual, 132

excitatory connections, 162 inhibitory connections, 162

in perceptual processes, 117

Hallucinations,

Histology,

Homeostasis,

of sense-organs, 6 sensory, 114-120

thyrotropic, formation, 247

motivational system in, 47

psychical traumatization in development, 64

effect of X-rays, 137

Hormones,

Human beings,

Hypertensive disease,

Hypophysis,

Impulse, afferent, unconditioned, 58 centrifugal, 57 excitatory, 30 inhibitory, 30 nerve, signals as bearers of information, 155 nervous, 39 pattern, 194

hypnotic, 230

amount of, 122 brain, and wave processes, 75 channels of transmission, 257 characteristics of probability, 72 coding, 85 conceptual, 122-141 of consciousness, 71 conserving, 71 in mental messages, 157 motivational, source, 48 nerve impulse signals as bearers of, 155 preservation, in brain, 155 processing, 71 psychology, 6, 82 reception, in brain, 155 recording, 71 reducing capabilities of sensory systems, 115 retention of, in brain, 155 from retina to visual cortex, 39 semantic, 85 sensory, 122-141

Induction,

Information,

and α-rhythms of EEG, 6 from psychiatric point of view, 6

and memory, 2 storage, 89

stored in memory, structure, 60 transformation, in brain, 155 transmission, in brain, 77, 155 transmitting, 71

Information theory, human thought and, 155, 159 psychology and, 79-96

of efficiency, in conditioned stimuli, 55 of formation of thyrotropin, 249 reciprocal, 231

Inhibition,


Innervation, of accessory motor apparati of sensory systems, 118 excitatory efferent, of sense receptors, 260

sensory, modified by nervous system, 40

centers, 152 of nervous activity, mechanism, 22

across membranes, 38 non-linear, in endocrine system, 244 pituitary--thyroid, 244

stimulation, as pathological signals, 67

of nervous system, 257

Inputs,

Integration,

Interactions,

Interoceptors,

Irrelevance,

Kinetics, of endocrine system, 244

Language, logical structure, 70 translations, informational machine, 77 universal scientific, 77

time, 129

behavior, models, 178 complex form of mental activity, 59 cybernetic models of, 7 experimental tests for, 182 during growth, 47 mechanism, 178, 189 models, 97-113, 177-214

classification, 113 knowing phase, 99, 113 learning phase, 99, 113 non-binary, 113 for pattern recognition processes, 6, 95 perceptive, 113 principle, 98 properties, 97-113

cortical, afferences on, 55 minimum metabolic, 193

and motivating systems, 41 and reticular formation, 40

Latency,

Learning,

Learning, matrix,

Level,

Limbic system,

primary energiser of conceptual system, 48 Logic,

of nerve nets, 6

Machine, adaptive, 7

application, 235 in psychiatry, 224-235

causal, 28 informational, language translations, 77 learning, 3 translating, 3

theory, 70 Malfunction,

of central nervous system, 244 of endocrine system, 244

of cell structure, 244 of diffusion processes, 244

of nervous system, 2

learning, 97-113

basic, of memory, 6 of cerebral cortex, 53 of conditioned reflexes, 5 inborn control, operation, 68 for integration of nervous activity, 22 for learning, 178, 189 memory, 154, 193, 195 of physiological process of thinking, 154 regulating, rise in blood pressure and, 65

Membrane potential, fluctuations, 19 threshold, 9 variations, 8

depolarization, 21 6 excitation, 216 interactions across, 38 permeability, 253

active, 142-153 basic mechanism of, 6 capacity, 142-153 cybernetic model, 143-153

construction, 144 human, parameters, 6 information stored in, structure, 60 long term, 2 mechanism, 154, 193, 195 short term, 2

model for, 178 storage of information, 2 structure, 135, 152

projection of actual reality, 71

hallucinations, 129

mental, information, 157 transfer, 1 transmission, across synapse, 1

analogue type, 39

Mathematical theory,

Mathematics,

Matrix,

Mechanism,

Membranes,

Memory,

Mental picture,

Mescaline,

Messages,

Models,


brain, artificial mathematic, 78 of central nervous system, 24 conceptual, 6, 95 cybernetic,

of cognition, 7 of learning, 7

digital type, 39 of learning, 177-214 mathematical, of cortical structure, 25 nerve-net, 262 neuron, electronic, 9 perceptual, 6 for short term memory, 178 specific, for neural organizations, 24

Molecules, RNA, 195

Motivation, of behavior, 61

Motor neurons, axial, 57

Movement, of eye, 36

Muscle spindles, 13, 120 Myelin,

preparations, density, 173 quantitative assessment, 7

biochemical changes, 38 cells, refractory period, 38 electrical changes, 38 of human nervous system, 38 model, experiments, 8-21 nets, logic of, 6 networks, 83 optic, excitation, 74 strychnine treated, 19

conduction, speed, 38 excitation, 223 ‘noise’ characteristics, 20 peripheral, fluctuations, 22 recovery period, 10 stimulated, 8

Nervous activity, mechanisms for integration, 22

Nervous system, actual, 37 characteristics,

Nerve,

Nerve fibers,

fractionation, 38 recruitment, 38 summation, 38

coding of information, 89 conceptual, 5, 37

physiology, 50 conditioning, 2 and finite automata, 5 human, nerves of, 38 irrelevance of, 257

learning matrix, 97-113 mathematics of, 2 reverberating circuits in, 45 sensory inputs, 40 storage of information, 89 structure of, 44 and telecommunication, 79-96

of adaptive neurons, 193 consisting of neurons, 31 with ideal neurons, analysis, 35 linear, sequential, 5 of malleable neurons, 193 nervous,

analysis of structure, 63 theory, 35

framework, 181 theory, 70

Neurocybernetics, outline, 1-7

Neurons, adaptive, network, 193 afferent contacts, 164 all or none law, 2, 6 artificial, population, 179 auditory, first order, 19 cell bodies of, 3 of cochlear nucleus, 13 connectivities, 179 dendritic spines of, 164 density, 164 efferent, 164

inhibition, 87 excitability, 217 excitation produced by, 164 facilitation, 217 ideal,

Networks,

neural,

analysis of networks, 35 of McCulloch-Pitts type, 30

idealized, sequential behavior, 5, 36 inhibition, 217 interconnectivity, 22 malleable, network of, 193 memory and, 143 in midbrain reticular formation, activity, 13 model, electronic, 9 network consisting of, 31 primary sensory, 115 Purkinje, output fibers of, 161 receptor, 115 sensory, secondary and higher, 120 as switch, 97 as synchronous unit, 30-36 threshold, 165

in units of excitation, 30 Nodes of Ranvier,

isolated, 5


Nodes of Ranvier (continuation)

Noise, A-fiber in sciatic frog nerve, 9

characteristics, in nerve fibers, 20 factor, in excitation, 8

cochlear, neurons of, 13 Nucleus,

Olfactory system, 120 Organization,

central nervous, neuronal analogues, 40 of dendritic plexus, 3 neural, specific models, 24

Oscillation, in endocrine system, 138, 244 modes, 2

Overlapping, functional, in cerebral cortex, 45

Pathways, afferent,

sensory, 115 synapses, 116

efferent, to peripheral receptor organs, 118

impulse, 194 of learning matrices, 6 mechanical, of living behavior, 222 neural discharge, 19 periodic, in EEC, 36 recognition, complex form, mental activity, 59

of cold, 34 facilitation, 130 inhibition, 130 of light, 127 model, 81, 93 sensations into, 74 sensory, theories, 115 transmission of vibration by nerve tissue, 73 visual,

Patterns,

Perception,

holes, 36 threshold for, 36

of warmth, 34 Perceptrons,

damage of, 23 Perikaryon,

volume, 164 Permeability,

of cell, 249 membrane, 253

pathologic, cybernetic explanation, 5

-thyroid interaction, 244

dendritic, organization, 3

Phenomena,

Pituitary,

Plexus,

Population,

of artificial neurons, 179

electrical relations between, 46 local and fluctuation, 9

characteristics, and information, 72 intensity function, 14 intensity relation, 15 of response, 5

conversion into, 14

solving, complex form of mental activity, 59

computing, algorithm as, 204

dendritic, 164 diffusion, mathematical theory, 244 Golgi staining, 3 mental,

allegories, 71 automation, 95

Potential,

Probability,

Probits,

Problems,

Procedure,

Process(es)

pattern recognition, learning matrix for, 6, 95 perceptual, homeostasis and, 117 physiological, in brain, 72 psychical, resonance theory, 73 records, of man’s nervous activities, 71 wave, and brain information, 75

sensory, theories, 115

of action potential, 1

adaptive machines in, 224-235

information, 6 and information theory, 79-96

Psychosis, periodic, 244

Purkinje neurons, dendritic ramifications, 170 output fibers from, 161

Pyramidal cells, branches, axon of, 168 sizes, 161, 171

Processing,

Propagation,

Psychiatry,

Psychology,

Ramifications, axonal, 164 dendritic, of Purkinje cells, 170

Randomness, in cortical areas, 22

Receptor, auditory, 123 differentiation, 122 information, 122 sense, excitatory efferent innervation, 260 visual, 123

Receptor cells, efferent fibers ending on, 6, 118


Receptor organs,

Record, peripheral, efferent pathways to, 118

encephalographic, in acoustic waves of speech, 71 of brain waves, 71

Recovery period, fluctuation in excitability in, 8 of nerve fiber, 10

characteristics of nervous system, 38

amount of, 84

models, 81 responses of organism, 60 stretch, 217

Refractory period, of nerve cells, 38 relative, 10

Regulation, of blood pressure, 66

Reinforcement, principle, 61 theory, 48

between electrical potentials, 46

skin, changes in, 231 synaptic, lowered, 41

average ‘evoked’, 30 cortical, to auditory stimulation, 116 photic, at optic chiasm, 116

characteristic habituation of, 116 probability, 5 reflex, of organism, 60

Reticular formation, attention and, 117 conditioning and, 117 brain stem, adaptation state, 141 diencephalic-mesencephalic, in sensory control, 116

habituation and, 117 limbic system, and primary energiser of conceptual system, 48 midbrain, neurons in, activity, 13

bipolar cells in, 120 distribution of excitation to visual cortex, 49 ganglion cells in, 120 information from, to visual cortex, 39 points, excitation, 74

Reverberating circuits, in cortex, 167 in nervous system, 45

concentration, 195

Recruitment,

Redundancy,

Reflex,

Relations,

Resistance,

Response,


Retina,

RNA,

memory and, 143 molecules, 195 synthesis, 195

Scheme, of cerebral cortex, 5, 58 functional, of processes of thinking, 154

Self-awareness, degree, 40

Self-organization, algorithms and, 61

Sensation, into perception, 74 visual, 218

kinesthetic, 266

efferent fibers to, 120 homeostasis, 6 message, 82 proprioceptive, 122

auditory, 132 visual, 132

Sensory system, accessory motor apparatus, 6

classification, 43 Signal processing,

bits per sec, by neurons, 22 Simulation,

of human behavior, 37 Speech,

acoustic waves, encephalographic record in, 71 content, 70

dendritic, of neuron, 164

dendrites, 170

auditory, cortical response to, 116 electrical, of brain, 45 of interoceptors as pathological signals, 67 of nerves, 64 of nervous centers, 64 sensory,

Sense,

Sense-organs,

Sensory stimulation,

innervation, 118

Spine,

Stellate cells,

Stimulation,

auditory, 132 visual, 132

Stimulus, acoustic, 87 auditory, 149 combination, 87 conditioned,

inhibiting action, 53 inhibition of efficiency, 55

intensity, 5 noxious, voluntary control, 231 olfactory, 86, 87


Stimulus (continuation) sound, 218 visual, 49

of information, memory and, 2 system, 48

cell, mathematical theory, 244 cortical, construction of mathematical model,

of grey substances, 6 of molecular layer of cerebellum, 7, 162 of nervous networks, analysis, 63 of nervous system, 44 of thinking and language, 70

application, 8 nerve, treated with, 19

characteristics of nervous system, 39

decrement, 175 excitatory, morphological distinction, 175 inhibitory, morphological distinction, 175 ions at, 122 nature,

Storage,

Structure,

5, 25

Strychnine,

Summation,

Synapses,

excitatory, 162 inhibitory, 162

transmission of messages across, 1 Synchronization,

of firing, 46 Synthesis,

macromolecular, bits per sec in, 22 RNA, 195

chemical transmission, 257 conceptual, primary energiser, 48 endocrine,

System,

autostimulation, 137 cybernetical aspects, 138 homeostasis and, 138 oscillation in, 138

and reticular formation, 40 limbic, 41

motivational, in human beings, 47 nervous,

conceptual, 5

conditioning, 2 finite automata and, 5 mathematics, 2

olfactory, 120 pathological, of conditioned reflexes, 66 pathological control, formation, 67 reticular, 41 self-organizing control, 61, 68.

sensory, information reducing capabilities, 115 storage, 48

derangement, 66

Telecommunication, and information psychology, 6 and psychology, 79-96. models, 79-96

of messages, 1

information in brain, 155

messages, across synapses, 1 nervous, 1 system, chemical, 257

axonal, 168 dendritic, 168

Transfer,

Transmission,

channels, 257

Tree,

Unit, of excitation, threshold of neurons, 30 synchronous, neuron as, 30-36

application, 8 Urethane,

Vibration, of cortical cells, 74 transmission, by nerve tissue, 73

of cytoplasm, 253

distributions of excitation by retina, 40 information to, from retina, 39

of perikaryon, 164

Viscosity,

Visual cortex,

Volume,

X-Ray, effect on hypophysis, 137

PRINTED IN THE NETHERLANDS
BY NOORD-NEDERLANDSE DRUKKERIJ, MEPPEL