Physica D 45 (1990) 205-207
North-Holland

BIOLOGY

WHAT CAN AUTOMATON THEORY TELL US ABOUT THE BRAIN?

Jonathan D. VICTOR
Department of Neurology, Cornell University Medical College, 1300 York Avenue, New York, NY 10021, USA
and Laboratory of Biophysics, The Rockefeller University, 1230 York Avenue, New York, NY 10021, USA

Received 13 September 1989
Revised manuscript received 11 January 1990

The potential contribution of the theory of cellular automata to understanding of the brain is considered.

1. Introduction

How the brain works is one of the fundamental problems in biology. Despite dramatic advances in understanding at the molecular and cellular level, a number of basic issues remain. These include, for example, the mechanism for the determination of neural connectivity by genetic and epigenetic factors, and the basis for the difference between the intellectual capacity of the human brain and that of other species. Satisfactory answers require integration of understanding at multiple levels of structure into a coherent whole. Thus, a theoretical framework, as well as experiment, is required. But what can one reasonably expect from theory, and what is its relationship to experimental investigations? With these questions in mind, I would like to outline areas of inquiry in which I believe that theoretical insights will make significant contributions towards understanding the brain.

The best place to begin is to consider the landmark contribution of von Neumann [1]. Von Neumann constructed a cellular automaton capable of universal computation and self-reproduction. This construction demonstrated that a small set of local rules acting on a large repetitive array can result in a structure with very complex behavior. The von Neumann construction thus immediately suggests how an organ with behavior as complex as the brain's can be specified from limited genetic information.
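Von Neumann's twenty-nine-state construction is far too large to reproduce here, but the central point - that a small set of local rules acting on a large repetitive array can generate very complex behavior - can be illustrated with a much simpler object: an elementary cellular automaton with two states and two neighbors. Rule 110 of this family is itself known to be capable of universal computation. A minimal sketch (the lattice width and number of steps are arbitrary choices):

```python
# Illustrative only: a 2-state, 2-neighbor elementary cellular automaton,
# far simpler than von Neumann's 29-state, 4-neighbor construction.
# Rule 110 of this family is nevertheless known to be computation-universal.

def step(cells, rule=110):
    """One synchronous update of an elementary CA with periodic boundaries."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=24, rule=110):
    """Iterate the local rule starting from a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    # Print the space-time diagram; even this trivial rule generates
    # an intricate pattern of interacting local structures.
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The entire specification here is the eight-bit rule number; this is the sense in which limited local information can nonetheless determine complex global behavior.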
That each unit of the von Neumann automaton had twenty-nine states and four neighbors is inessential to its biologic import; what is important is that the construction could be done at all. The property of universal computation is only indirectly important: we do not need to think of the brain as a universal Turing machine in order to apply the cellular automaton metaphor; universal computation is simply a rigorous way to guarantee that the von Neumann construction has behavior that, all would agree, is complex. The property of self-reproduction is relevant in a similar fashion, in that it is a rigorous way to guarantee a rich behavioral repertoire.

The observation that the cerebral cortex is composed of a large number of local neural assemblies that are iterated throughout its extent, by itself, is not an existence proof that complex behavior may result from a network of simple elements. Von Neumann's construction is necessary to show that such a structure is indeed capable of complex behavior, without the need to invoke region-to-region variability, long-range interactions, stochastic components, or mysticism.

Whether the von Neumann metaphor is qualitatively correct is open to question. However, even if we agree that it does capture an essential feature of brain organization, there is more to be done than to turn the metaphor into an allegory: several challenging areas of theoretical inquiry remain.

0167-2789/90/$03.50 © 1990 - Elsevier Science Publishers B.V. (North-Holland)

2. The problem of robustness

One major qualitative difference between the behavior of the von Neumann construction and that of the brain is that of robustness, or stability to perturbation.
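Stability to perturbation can be given operational form as a damage-spreading experiment: run two copies of an automaton from initial conditions that differ in the state of a single unit, and track the Hamming distance between the copies over time. The sketch below applies this to a toy two-state majority-vote rule rather than to the von Neumann machine itself; the rule, lattice size, and neighborhood radius are all illustrative choices:

```python
import random

def majority_step(cells, k=2):
    """Synchronous update: each cell adopts the majority state of its
    (2k+1)-cell neighborhood, with periodic boundaries. A deliberately
    'smooth' toy rule, chosen for illustration only."""
    n = len(cells)
    return [
        1 if sum(cells[(i + d) % n] for d in range(-k, k + 1)) > k else 0
        for i in range(n)
    ]

def damage(rule_step, cells, steps):
    """Flip one cell and track the Hamming distance between the two runs."""
    a = list(cells)
    b = list(cells)
    b[len(b) // 2] ^= 1                      # single-unit perturbation
    distances = []
    for _ in range(steps):
        a, b = rule_step(a), rule_step(b)
        distances.append(sum(x != y for x, y in zip(a, b)))
    return distances

if __name__ == "__main__":
    random.seed(0)
    cells = [random.randint(0, 1) for _ in range(200)]
    print(damage(majority_step, cells, steps=20))
```

Under this rule a single flipped cell is typically voted back into conformity, so the damage stays bounded; under a less forgiving rule the same flip can spread without limit. The question in the text is which regime a biologically plausible rule must occupy, and at what cost in states and neighbors.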
Alteration of the state of a single unit of the von Neumann machine typically leads to catastrophic failure; malfunction of a single neuron or neural assembly should have no measurable effect. To some extent, the instability of the cellular automaton model may be a consequence of the discretization of states - but how many states are needed to provide for robustness?

Closely related is the problem of robustness of behavior on an imperfect lattice. The cellular automaton metaphor becomes very unattractive if it would require an exactly periodic brain. Presumably, neural connections are formed on the basis of a regular overall plan, but with variability at the level of the individual neuron. If the four neighbors of a von Neumann unit were selected at random from a somewhat larger local neighborhood, the construction would fail catastrophically. Again, this failure is probably a consequence of the small number of neighbors in the von Neumann machine - but how many neighbors are necessary?

Yet another kind of robustness is that of relative insensitivity to global alterations of the transition rule itself. In this regard, the brain does not appear to be robust. Rather, special homeostatic mechanisms such as autoregulation of cerebral blood flow act to minimize changes in the metabolic milieu that might induce global changes in neural function. When the homeostatic mechanisms fail in even apparently subtle ways, the result is a gross disturbance of consciousness. It is plausible that the cost of ensuring some measure of insensitivity to mild changes in the transition rule itself is much higher than that of ensuring insensitivity to state change or connectivity - but can this notion be made precise?

Introduction of more states and more neighbors leads to other questions. What is the tradeoff between the number of states and the number of neighbors in providing a given level of reliability? A many-neighbor (100 or more), many-state (20 or more) automaton is biologically implausible unless there is some regularity to the state space and the transition rule. An irregular rule is also very likely to be unstable to perturbation. For a plausible rule, it should be possible to view the states as discrete points in a continuous state space, and the transition rule should be at least piecewise continuous on state space. But what can be said about the topology and dimension of that space, and how complex must the transition rule be?

These abstract questions can be made quite concrete if, say, we consider a von Neumann unit to be a neuron. Does a single variable (such as a transmembrane voltage) suffice to specify a neuron's state, or must additional variables (perhaps the concentration of calcium or of a trophic factor) be considered? Such factors are crucial for development and synaptic plasticity. But are they required for moment-by-moment information processing as well? Questions about the transition rule become questions about the information-processing capacity of a single neuron. Does it suffice to lump all the neighbors' inputs into a few pools which combine additively (e.g. an excitatory pool and an inhibitory pool), or is it necessary to hypothesize more complex interactions among a neuron's inputs? Interactions of this kind, such as presynaptic inhibition, are well known - but are they a requirement, an efficiency, or an inessential byproduct of evolution from a more primitive nervous system?

3. The problem of scaling

To understand the behavior of a model neural network, it often appears necessary to create an explicit computer model. Indeed, if the behavior of the model could be predicted in a simple fashion from its axioms, the model would likely be criticized as not possessing the emergent properties that are the essence of an acceptable model for cortex.
But even the most ambitious explicit computer simulations are dwarfed by the number of elements in a real cortex. The scale of a computer model (10^4 to 10^5 elements) is about halfway between that of just a single unit and that of a real brain (10^8 to 10^10 elements). We need to know how characteristic lengths and times scale as the number of network elements increases from that of a computer model to that of a real brain. A network whose settling time increases even linearly with network size will have very different behavior when scaled up by four or five orders of magnitude.

More generally, we need to be able to understand what happens to global dynamics (the number, dimension, and stability of attractors, for example), and whether additional qualitative properties of the model will emerge at a more realistic scale. A thorough understanding of scaling behavior may permit a clearer answer to the question of whether the qualitatively distinct features of human brain function may simply be viewed as consequences of its size, or rather, whether other processes (such as the development of new, specialized brain regions) must be invoked.

4. The problem of model testing

Let us now assume that we have a model in hand, with acceptable robustness, a biologically reasonable transition rule, and scaling behavior within grasp. How, and to what extent, can the model be tested?

Model testing requires two levels of investigation. (i) Does the overall behavior of the model correspond to that of the brain? (ii) Is there a detailed correspondence between parts of the model and parts of the brain? These questions seem inextricably related: it only makes sense to inquire about a detailed correspondence if overall behavior is acceptable, but how do we ask about overall behavior if we do not know what the model counterparts of biologic observables are?
Theory can help in two ways. Firstly, theory may be able to identify certain kinds of observables that are relatively insensitive to hypotheses about detailed correspondence. Possibilities for such observables might include the dimensionality and stability of limit trajectories, or how mutual information at two points in the model scales with separation or time lag. Such ideas might suggest new ways to interpret anatomic, single-cell, and gross-potential data.

Secondly, we would like to know to what extent two models which have different internal descriptions may have similar overall behavior. This is perhaps the most important contribution that theory can make. A relatively minor benefit is that competing models that can only be distinguished on the basis of detailed biologic correspondence will be recognized as such. The major benefit is that such an understanding will identify the critical features of internal structure which do affect overall behavior - and thus, define the critical experimental questions.

5. Conclusion

I hope that the questions raised above will serve as a focus for theoretical efforts, and as a framework for the design and interpretation of experiments. The viewpoint implicit in these questions is likely to be considered by experimentalists to be an apology for automaton theory, and by theorists, to be one of dissatisfaction with the state of the art. It is neither. My view is that the questions raised above are difficult but not unanswerable. Progress will be made, but progress will require new theoretical insights and mathematical analysis, and not merely brute-force computer simulations. Von Neumann's work has had a profound impact on neuroscience; if we can continue in his tradition, there will be further rewards.

Acknowledgements

This work was supported in part by the McKnight Foundation and by NIH grant EY7977.

References

[1] J. von Neumann, Theory of Self-Reproducing Automata, ed. A.W. Burks (University of Illinois, Urbana, IL, 1966).

