Creativity in Nature and in the Mind: Novelty in Biology and in the Biologist's Brain

Henri Atlan

SubStance, Vol. 19, No. 2/3, Issue 62/63: Thought and Novation (1990), pp. 55-71. Published by the University of Wisconsin Press.

DARWINIAN AND NEO-DARWINIAN theories of biology are relatively recent, compared to "creationism" and "fixism." Not only are they newer; they are able to account for the emergence of "newness" in living systems.

This observation underlines the fact that "biological novelty" means two different things: a new theory or discovery in biology, and also the appearance or emergence of "newness" in a living system, either at the level of the organism or of the species. Although different, these two kinds of novelty are related. If we wish to use the all-encompassing concept of "Nature," we must say that both kinds of novelty are produced by Nature. In other words, both could be described under the general term "the creativity of nature," and could be accounted for by natural laws, without resorting to the will and miraculous deeds of the Creator. In principle, the relationship between these two kinds of biological novelty should "work" in both directions.

On the one hand, "newness in a living system" depends on how life is considered by the biologist--either as a completely deterministic process where "newness" is a temporary illusion due to ignorance, or as a truly creative process (whatever that means), or as something in between, such as a system of self-organization of various types. On the other hand, the biologist is himself a living system, both as an individual and as a part of the collective effort which constructs biology. Therefore, innovation in biological science should be accounted for by the biological theory of "newness" in living systems. This is what a new trend in the philosophy of knowledge is attempting to do, by means of a neo-Darwinian theory, under the name of evolutionary epistemology.

However, I believe that a third kind of newness must be considered and differentiated from "the creativity of nature"--the creativity of the human mind as it is experienced from within (in this case, the creativity of my own biologist's mind, where I experience intentionality). In theory, nothing prevents us from including this in "the creativity of nature," and from stating that intentions and free will are subjective illusions. Similarly, nothing prevents us from seeing as an illusion the idea that humans are active agents capable of initiating causal chains, and are not simply determined by them. However, I deliberately choose not to do so, in order to maintain the reality of social and ethical constructs based on the postulate of the responsibility of intentional individuals.

A different way of saying the same thing is to consider psychology and sociology as autonomous sciences, relative to biology, and to consider biology as an autonomous science relative to chemistry and physics. Although biology does not contradict chemistry and physics, it cannot be reduced without residue (in Nagel's sense) to those sciences, because it deals with functions in teleonomic (non-purposeful) physico-chemical systems. Similarly, although the sciences of the human mind do not contradict biology, they cannot be reduced to it, because they deal with purposeful, intentional, biological systems.

This does not mean that one has to assume the reality of free will, and that one cannot assume causal determinations for the intentions themselves. But purposeful intentionality is recognized as a particular kind of efficient causality, and as such, is a specific object of the sciences of man. The question of free will must be left aside until (if ever) we know in detail how specific intentions are causally produced.

The only reason why I might make the opposite choice and treat intentionality as an illusion would be an a priori belief in the unity of science, based on reductionism of one kind or another.

If one is willing to distinguish between these three kinds of novelty--two of them being classically attributed to the creativity of nature, and the third to the creativity of the mind--then what happens in the biologist's brain (my brain) is a non-reducible superimposition of two natures and one mind--the nature of the brain as a biological system, the nature of the biologist as a part of the social system which makes biology, and the mind of the man that I happen to be.

Programs and Self-Organization

Until the first half of this century, the creativity of nature and especially the creativity of life was a source of poetic inspiration or of speculation for mystic traditions. As such, it had some influence on vitalistic theories in biology, but vitalism was losing most of its battle against mechanicism, receiving its final blow from molecular biology in the 1960s.


This, however, was not the end of the story. On the contrary, the science of modern biology, non-equilibrium thermodynamics and, later, the computer led to the questioning of the notion of complexity, and to its emergence as an object of science. Since living organisms are made up of the same matter and obey the same laws as other physico-chemical systems, their specificity must be sought in the principles of organization, which would account for their complexity and for their regulatory and adaptive properties. At the time that the computer sciences were beginning their rapid expansion, people like Von Neumann, Von Foerster and others were looking for principles of organization which could be learned and transposed from natural living systems into man-made, complex adaptive systems. This was the beginning of what is known today as "the sciences of complexity."

One of the first realizations that complexity itself could and should be an object of science arose from the biology of the brain as well as from developmental biology. Shortly after the initial enthusiasm following the discoveries of DNA structure and the mechanisms of gene replication and expression, it became clear that this was hardly the end of the story. It was not enough to uncover the structure of the parts--molecules and cells--in order to understand the structure and functioning of the whole.

For some time, the notion of a computer program (in the form set forth by Von Neumann), i.e. a sequence of logical computing instructions, dominated the scene as a principle of organization. However, it was soon realized that this notion of a sequential deterministic computer program, transposed to DNA sequences, was more a useful metaphor for talking about genetic determinations than a reality.

That is why principles of natural organization which could supplant the anthropomorphic aspects of the computer program were sought. Practically the same idea came from different fields--from information theory, thermodynamics, chemical kinetics and, more recently, from the physics of disordered systems and artificial intelligence. This common idea is that randomness is not just a negative force which contributes nothing to the organization of nature and can only destroy it, but that, within constraints, a certain amount of randomness (or "noise" as it is called in information theory, or "random fluctuations" in dynamics) is necessary for self-organization to take place, leading to an increase in complexity and the emergence of unpredicted structures and functions.
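To make the idea concrete, here is a minimal numerical sketch of my own (an illustration, not the article's formalism): a single variable governed by double-well dynamics, started at the featureless unstable state x = 0. Without noise the system stays frozen; with a little noise it settles into one of the two stable states, so the fluctuation is precisely what lets a definite "organized" state appear within the deterministic constraint.

    import random
    random.seed(0)

    def relax(noise_amplitude, steps=2000, dt=0.01):
        # double-well dynamics dx/dt = x - x^3: x = 0 is an unstable, featureless state,
        # x = +1 and x = -1 are the two stable "organized" states
        x = 0.0
        for _ in range(steps):
            x += dt * (x - x ** 3) + noise_amplitude * random.gauss(0.0, dt ** 0.5)
        return x

    print("without noise:", relax(0.0))                                   # stays frozen at 0.0
    print("with noise:   ", [round(relax(0.2), 2) for _ in range(5)])     # ends near +1 or -1, unpredictably

Which well is chosen is unpredictable in advance, yet the end result is more structured than the initial state: randomness here acts as an organizing factor within a constraint, not against it.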

We formalized this idea in an abstract way in the 1960s, to show that it was not absurd, and gave it a name: first "order from noise," coined by Heinz Von Foerster, and then my own "complexity from noise." Today it is widely accepted, since, as I have said, it also came from other disciplines--mainly from the application of automata network computation to physics and artificial intelligence.

Self-organizing Networks and the Emergence of Function

Since A. Turing did his pioneering work on morphogenesis by the coupling of diffusion and chemical reactions, many models of cooperative phenomena (making use either of sets of continuous differential equations or of discrete networks of cellular automata) have repeatedly produced the same kind of phenomenon. Starting from a homogeneous initial state (which may be set by a random distribution of states over the elements of the network), the system evolves "spontaneously" towards a stable state where a macroscopic structure in space and/or time can be recognized. This evolution is the result of local laws of interaction between the coupled phenomena or the connected automata. This phenomenon is very general, and is observed for a wide range of local laws of interaction (continuous, boolean, threshold functions, etc.) and of connection schemes.
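A toy version of this observation, in the spirit of discrete automata networks rather than Turing's reaction-diffusion equations (my sketch, not taken from the article): a ring of binary cells updated by a purely local majority rule. A statistically homogeneous random initial state settles into a stable state made of contiguous domains--a macroscopic structure that is nowhere specified in the local law.

    import random
    random.seed(0)

    N = 60
    state = [random.randint(0, 1) for _ in range(N)]   # statistically homogeneous random initial state

    def step(s):
        # local law on a ring: each cell adopts the majority of itself and its two neighbors
        return [1 if s[(i - 1) % N] + s[i] + s[(i + 1) % N] >= 2 else 0 for i in range(N)]

    print("t = 0:     ", "".join(map(str, state)))
    for t in range(1, 31):
        new = step(state)
        if new == state:        # a fixed point has been reached
            break
        state = new
    print("t = %d:     %s" % (t, "".join(map(str, state))))   # contiguous domains have appeared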

The microscopic structure can be set up randomly (at least partially), as in a random boolean network, where different boolean laws are distributed randomly on the elements of the network. Even in relatively small networks of a few coupled processes--as are studied in immunology and cell biology--and all the more so in larger networks, whose aim is simulating the cognitive capacities of the brain, it is very difficult (usually impossible) to predict the emerging macroscopic structure from an inspection of the microscopic structure and initial state, without actually running the computer simulation.

These models show how a mixture of local determination and randomness can easily produce apparently "self-organizing" structures which were not explicitly programmed. Even systems that are completely deterministic at the level of local interactions can produce this kind of phenomenon, if their dynamics are rich enough so that their integration leads to several different stable solutions, and if they are submitted to random fluctuations which drive them towards one or another of these solutions. The extreme case is that of so-called "deterministic chaos" or "strange attractors," where the number of possible solutions is as large as the number of possible initial states (i.e. infinite, for continuous variables), so that the smallest possible fluctuation may drive the system to a completely different, unpredictable solution.
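The sensitivity invoked here can be shown with the simplest chaotic system, the logistic map (again an illustrative sketch of mine, not a model used in the article): two trajectories whose initial states differ by one part in a billion quickly become completely different.

    # logistic map x -> r x (1 - x) in its chaotic regime
    r = 4.0
    x, y = 0.3, 0.3 + 1e-9          # two initial states differing by one part in a billion
    for t in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if t % 15 == 0:
            print("t = %2d   x = %.6f   y = %.6f   |x - y| = %.2e" % (t, x, y, abs(x - y)))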


Self-organization has become popular in artificial intelligence, with the use of neuron-like automata networks to perform tasks of pattern recognition by non-directed learning and distributed associative memory ("neural net computation"). This approach, which makes use of several competing algorithms, consists of more extended, more sophisticated versions of the perceptrons developed in the 1960s. For this reason, it has been termed neo-connectionism, and has become well-known in the cognitive sciences under that name.

However, even at this level of computer simulations of physico-chemical processes (where no intentional, conscious or unconscious definition of the self is assumed), it is appropriate to make a clear distinction between two kinds of self-organization. I suggest that "strong" self-organizing systems be distinguished from "weak" ones.

Examples of "weak" self-organization systems can be seen in most artificial intelligence applications of "neural network computation" to the design of learning machines and associative distributed memory. A typical performance of such networks consists of recognizing an infinite number of variations of a given pattern (such as a handwritten letter of an alphabet) after exposure to a limited, relatively small sample of such variations. Contrary to classical pattern-recognition procedures, the class of patterns to be recognized is not explicitly defined by specific features, and program instructions are not given to search for such features. Rather, very general learning rules are set up, so that exposure to samples of the class members will trigger a modification of the network's connection structure. Then, further exposure to different pattern members of the same class will lead to recognition with a reasonably high rate of success. It is in this sense that the network organizes itself, since the learning rules are not specific for recog- nition of any particular pattern. They are the same for all possible classes of patterns-to be defined by the "experience" (the exposure itself) during the learning period. Specific connections are automatically established, but in a non-explicitly programmed manner, by applying the rules while exposing the network to external stimuli (the samples of patterns which, implicitly, define the class). This "self-organization" of the connection structure is real in the sense that it is not explicitly programmed. However, it is "weak" self-organization, because the task to be achieved is defined (implicitly) from the outside--the goal in this case is to recognize a given pattern, and more generally, to solve a given problem, set up from the beginning by the designer of the network. The learning rules are considered efficient (or inefficient) according to a previously-established criterion (the extent to which the specific problems are solved and the proposed tasks performed).

This content downloaded from 142.134.145.190 on Mon, 5 May 2014 15:44:50 PMAll use subject to JSTOR Terms and Conditions

Page 7: Creativity in Nature and in the Mind: Novelty in Biology and in the Biologist's Brain

60 Henri Atlan

In other words, the meaning of the functioning of the network is still given, a priori, by the human designer.
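A rough sketch of such "weak" self-organization (my illustration, with hypothetical toy patterns, not a reconstruction of any particular system discussed here): a single threshold unit trained by the generic perceptron rule on noisy samples of two prototype patterns. The learning rule knows nothing about the patterns; the classes are defined only by the exposure, yet fresh variations are then recognized. The goal--separating these two classes--was still fixed from the outside.

    import random
    random.seed(1)

    DIM = 25   # a 5 x 5 binary "retina"; the two prototypes below are arbitrary toy patterns

    proto_a = [1 if i < 10 else 0 for i in range(DIM)]        # ink in the top two rows
    proto_b = [1 if i % 5 < 2 else 0 for i in range(DIM)]     # ink in the two left columns

    def noisy(proto, flips=3):
        # a "variation" of the pattern: the prototype with a few pixels flipped at random
        x = proto[:]
        for i in random.sample(range(DIM), flips):
            x[i] ^= 1
        return x

    w = [0.0] * DIM    # modifiable connection strengths; nothing pattern-specific is built in
    bias = 0.0

    def out(x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

    # learning period: exposure to a limited sample of labelled variations (perceptron rule)
    for _ in range(300):
        proto, target = random.choice([(proto_a, 1), (proto_b, 0)])
        x = noisy(proto)
        err = target - out(x)
        if err:
            w = [wi + err * xi for wi, xi in zip(w, x)]
            bias += err

    # test on fresh, previously unseen variations of the same two patterns
    tests = [(noisy(proto_a), 1) for _ in range(200)] + [(noisy(proto_b), 0) for _ in range(200)]
    print("recognition rate on new variations:",
          sum(out(x) == t for x, t in tests) / len(tests))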

"Strong" self-organization means that even the goal (the meaning of the structure and function of the machine) emerges from the evolution of the machine itself. The work of our group in Jerusalem, in collaboration with G. Weisbuch and F. Fogelman in Paris, on random boolean networks can give a primitive but illustrative example of this phenomenon.

Figure 1 shows such a network, after it has reached its attractor, in the form of a macroscopically structured state. Its initial state was macroscopically homogeneous, being a random distribution of 0's and 1's on the elements. One can see that it is now divided into subnets, characterized by their different temporal behavior. The elements in the network indicated as "S" are stable; i.e. their state, either 0 or 1, does not change in time. Those indicated as "P" oscillate periodically; i.e. their state changes in time according to a given, relatively short sequence of 0 and 1, cycling indefinitely on itself. This is an example of a well-documented phenomenon of macroscopic spatio-temporal structures emerging from microscopic laws (in this case, boolean functions) randomly distributed, and of certain kinds of connections. However, we have also found that this kind of network, after it has organized itself into such a spatio-temporal structure, has some striking properties of pattern recognition. If binary sequences are imposed from the outside on one of the elements, in order to perturb the network after it has reached its attractor, the perturbation is transmitted to the rest of the network in two possible ways. While some stable units (S) are destabilized, others which oscillated periodically in the unperturbed network become, surprisingly, stabilized by the perturbing sequence. The mechanism of stabilization is a complex resonance between the incoming perturbing sequence and the periodic cycle of some of the oscillating elements. However, this resonance is not restricted to one single periodic incoming sequence. It is also observed for partly random sequences, the periodic structure of which is limited to a few bits, while the others are indifferent. Thus the whole class of sequences (potentially infinite if their length is not limited) which share this minimal periodic structure will have the property of stabilizing a given element of the network.

[Figure 1a: a grid of the network's elements in its attractor, each marked S or P; the original figure is not reproduced here.]

Figure 1a. Emergence of macroscopic structures. The elements are connected in such a way that each of them receives two binary inputs from two of its nearest neighbors and sends the same output to the two others after computation. Each row and column is closed on itself. At every step of the computation, every element computes its output from its two inputs by means of one of the 16 possible 2-variable binary functions. The functions are randomly distributed on the elements of the network and do not change during the computation. The figure represents the states of the elements when the network has reached one of its attractors: S stands for stable, P for periodically oscillating with a relatively short cycle length.

[Figure 1b: the same grid under perturbation, each element marked 0, 1 or 2; the original figure is not reproduced here.]

Figure 1b. Emergence of function. The arrow indicates an element which serves as an input device for a recognition channel. After the network has reached its attractor (Fig. 1a), binary sequences are imposed on the input element and perturb the state of the network. The state of the elements indicated as "0" is the same (either S or P) as in the unperturbed attractor. The elements indicated as "2" were stable in the unperturbed network and are destabilized by the sequences imposed on the input element. The elements indicated as "1" were oscillating in the unperturbed network and are stabilized by a class of sequences imposed on the input element; they serve as output for the recognition channel. The sequences which stabilize them are said to be "recognized."
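The following sketch reconstructs, under stated assumptions, the kind of random boolean network described in the caption of Figure 1a, together with a Figure 1b-style probe. The grid size (16 x 16), the choice of which two neighbors feed each element (north and west), the particular input element, and the driving sequence are assumptions of mine, and the attractor is found by brute-force cycle detection, which the article does not specify; whether any oscillating element is actually stabilized by a given drive depends on the particular random network, exactly as in the experiments reported here.

    import random
    random.seed(0)

    SIZE = 16   # assumed grid side; rows and columns are closed on themselves (a torus)

    def make_rule(index):
        # one of the 16 possible boolean functions of two variables, encoded by a 4-bit truth table
        table = [(index >> k) & 1 for k in range(4)]
        return lambda a, b: table[(a << 1) | b]

    # a fixed, randomly chosen boolean function on every element
    rules = [[make_rule(random.randrange(16)) for _ in range(SIZE)] for _ in range(SIZE)]

    def step(state):
        # synchronous update; each element is assumed to read its north and west neighbors
        return [[rules[i][j](state[(i - 1) % SIZE][j], state[i][(j - 1) % SIZE])
                 for j in range(SIZE)] for i in range(SIZE)]

    def run_to_attractor(state, max_steps=2000):
        # brute-force cycle detection: iterate until a global state repeats
        seen, history = {}, []
        for t in range(max_steps):
            key = tuple(tuple(row) for row in state)
            if key in seen:
                return history[seen[key]:]          # the states forming the limit cycle
            seen[key] = t
            history.append(state)
            state = step(state)
        return history                              # fallback if no cycle was found in time

    initial = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
    cycle = run_to_attractor(initial)

    # Figure 1a: S if the element is constant over the cycle, P if it oscillates
    labels = [["S" if len({s[i][j] for s in cycle}) == 1 else "P" for j in range(SIZE)]
              for i in range(SIZE)]
    print("\n".join("".join(row) for row in labels))
    print("cycle length:", len(cycle))

    # Figure 1b-style probe: clamp one (arbitrary) input element to an external binary
    # sequence and see which previously oscillating elements become constant afterwards
    def probe(sequence, inp=(0, 3), settle=100):
        state = [row[:] for row in cycle[0]]
        late = []
        for t, bit in enumerate(sequence):
            state[inp[0]][inp[1]] = bit
            state = step(state)
            if t >= settle:
                late.append([row[:] for row in state])
        return {(i, j) for i in range(SIZE) for j in range(SIZE)
                if len({s[i][j] for s in late}) == 1}

    drive = [0 if k % 4 == 2 else random.randint(0, 1) for k in range(300)]   # partly periodic, partly random
    oscillating = {(i, j) for i in range(SIZE) for j in range(SIZE) if labels[i][j] == "P"}
    print("oscillating elements stabilized by this drive:", len(probe(drive) & oscillating))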

When this happens, i.e. when a sequence imposed on a given (input) element has the property of stabilizing another (output) element, we say that the network functions as a pattern-recognition device and that this sequence has been "recognized." Thus, for a given network in its steady state and a pair of elements functioning as input and output of the pattern-recognition device, a class of sequences is recognized, and those sequences which do not belong to that class are not recognized.

An example of sequences belonging to such a class is given in Figure 2. The class is defined by the displayed, partly random sequence where the asterisks stand for 0 or 1 indifferently, at random.

Figure 2: **0***0***0***0***0***0***0*
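In code, the class sketched in Figure 2 amounts to "every fourth bit is forced to 0, the starred positions are free"; a short sketch of mine for generating members of the class and testing membership:

    import random
    random.seed(0)

    def class_member(length=28):
        # every fourth bit (positions 2, 6, 10, ...) is forced to 0; starred positions are free
        return [0 if k % 4 == 2 else random.randint(0, 1) for k in range(length)]

    def in_class(seq):
        return all(bit == 0 for bit in seq[2::4])

    example = class_member()
    print("".join(map(str, example)), "->", in_class(example))        # True: a member of the class
    print("all-ones sequence            ->", in_class([1] * 28))      # False: not recognized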

The important point in this phenomenon is that the definition of the recognition class (the criterion for recognition) has not been programmed, and is the result of the overall spatio-temporal structure of the network in its attractor, and more particularly, of the states of the input-output elements, together with the elements which join them and their neighbors.

As a matter of fact, when the input and output elements for a particular recognition channel are far apart in the network, it is very difficult to figure out the details of the mechanism whereby the output element is stabilized when a member of a given class of sequences is imposed on the input. Not only has the criterion for recognition not been programmed; it can be discovered only a posteriori, by experiments, as in a true natural recognition device. For instance, one could think of a region in the brain of an animal which responds to some stimuli and not to others when it is explored by a pair of electrodes, one stimulating and the other recording the response. The task of the neurophysiologist is to discover the criterion for this discrimination, and its mechanism.

Thus, this recognition property is an emerging one, which accompanies the emergence of the macroscopic structure, as a function accompanies a structure.


This kind of phenomenon exemplifies "strong" self-organization, since it resembles a natural self-organizing system (e.g. a biological one), where the goal has not been set from the outside. What is self-organizing is the function itself, with its meaning; it is not simply a structure adapted to achieve a predetermined goal. In this example, where the function is pattern recognition, the criterion for recognition is the goal and the origin of meaning for the recognizing system. According to this criterion, the demarcation between "recognized" and "unrecognized" sequences transforms the whole undifferentiated set of possible binary sequences into those which are "meaningful" and those which are not, or into those which have one meaning and those which have another. Since the criterion for this demarcation is itself the result of self-organization, one can say in a sense that the origin of meaning in the organization of the system is also an emergent, self-organizing property.

In order to describe all these emerging properties (structural and functional) one must take into account the viewpoint of the observer (obviously not in a subjective sense, but in the sense usual in physics--the objective conditions of observation and measurement). The notions of microscopic and macroscopic structures are relative to the level of observation. Similarly, consideration of the network as a meaningful pattern-recognition device is an interpretation. More precisely, it is a projection of our own experiences of pattern recognition--either directly, or by means of our machines--on the observed behavior of the network. Therefore, even in the case of "strong" self-organizing systems (assumed to simulate "machine-like" natural systems, such as animals), we are left with the ultimate question of the goal of observation and interpretation. This will lead us to a second distinction, between intentional self-organizing systems (human) and non-intentional ones.

Before pursuing this question, I would like to offer a few comments on another aspect of uncertainty and complexity--the "underdetermination of the theories by the facts."

Underdetermination of the Theories by the Facts

"Underdetermination of the theories" is a phenomenon previously described by some philosophers who dealt with problems of cognition and linguistics. Today, we come across this problem in apparently much simpler situations. By "underdetermination of the theories by the facts" I mean the following: many different, non-redundant theories predict the

This content downloaded from 142.134.145.190 on Mon, 5 May 2014 15:44:50 PMAll use subject to JSTOR Terms and Conditions

Page 11: Creativity in Nature and in the Mind: Novelty in Biology and in the Biologist's Brain

64 Henri Atlan

same facts, and there is no way one can decide among them, based on observable facts alone.

This "underdetermination" was thought to be a kind of borderline phenomenon, expected to be met only at the periphery of real science, or in sciences in their early stages. However, with automata network computa- tion as a new method of modelling sets of interacting units working in a

coupled fashion, this phenomenon appears to be much less exotic than at first. As a matter of fact, it may turn out to be more the rule than the

exception, since we are already facing just such an underdetermination in the case of small numbers (less than ten) of interacting units. The quan- tification of this phenomenon may be seen as a measure of our difficulty in

understanding a system by means of an unambiguous theory. As such, it

may be the best operational estimate of the complexity of a natural system. However, the analysis of this phenomenon in terms of automata net-

work theory will also help us to understand how different complex cogni- tive systems-unpredictable in the details of their structures-can nevertheless share some features, and communicate in a meaningful way with one another.

Here too, we will find that uncertainty is, for us, an ambivalent situation: negative, in that complete predictability is lacking, and positive, in that it provides room for a better understanding of complexity in nature.

Let us consider any network of interconnected units, even a relatively simple one (in the sense that it is made of a small number of such units). Such a network may be used to represent the behavior of a system made of several elements or processes working in a coupled fashion; for example, different biochemical reactions in cell physiology, different cell populations in immunology, or different neurons or groups of neurons in the nervous system. The observable facts are represented by the states of the network, and more precisely, by its stable states. Now, the stable states are determined by the structure of the network; they can be computed on the basis of the connection structure between the units. That is why every connection structure of a given network represents a theory which allows one to explain and predict the stable states of the network--i.e. the observable facts.

In the work of theorization, the ideal would be that one single structure of the connections would predict the stable states observed in reality. Unfortunately, it is easy to see that in most cases we are unable to reach that goal, or even to come close to it, except under particularly favorable conditions. This is due to the fact that the number of possible connection structures is generally much larger than the number of possible states.

For example, for 5 interconnected units, the number of states is on the order of 2^5 = 32 if every unit can be in either one of 2 states, and it is 3^5 = 243, or 4^5 = 1,024, etc., if each unit can be in 3 states, 4 states, etc. However, the number of connections is 25 (each of the 5 units can receive a connection from any of the 5, including itself), and the number of possible connection structures, i.e. of different networks with the same 5 units, is of the order of 3^25 or more, if we assume that a connection can be activating, inhibiting, or non-existent. 3^25 is approximately 10^12; i.e. there are one thousand billion different ways to interconnect 5 units, or one thousand billion different theories with the same 5 elements.

Thus, we have on the one hand a few hundred states, and on the other, billions of different possible structures. As a result, a large number of network structures, different without being redundant, predict the same stable states. In other words, a large number of different theories predict the same observable facts. This is what is meant by "underdetermination of the theories by the facts." The theories outnumber the facts.
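The counting argument can be checked directly by exhaustive enumeration on an even smaller example (three units instead of five, with a simple threshold update rule of my own choosing, so this is an illustrative sketch rather than the immunological model referred to here): the number of distinct connection structures vastly exceeds the number of distinct sets of stable states they can produce, so very many different "theories" necessarily predict the same "facts."

    from itertools import product
    from collections import defaultdict

    N = 3   # three units instead of five, to keep exhaustive enumeration tiny

    def stable_states(weights):
        # a state is stable if every unit's thresholded input reproduces its current value
        # (a simple update rule chosen for illustration)
        stable = []
        for state in product((0, 1), repeat=N):
            nxt = tuple(1 if sum(weights[i][j] * state[j] for j in range(N)) > 0 else 0
                        for i in range(N))
            if nxt == state:
                stable.append(state)
        return tuple(stable)

    # every one of the N*N possible connections is activating (+1), inhibiting (-1) or absent (0)
    structures_per_fact = defaultdict(int)
    for flat in product((-1, 0, 1), repeat=N * N):
        w = [flat[i * N:(i + 1) * N] for i in range(N)]
        structures_per_fact[stable_states(w)] += 1

    print("possible connection structures ('theories'):", 3 ** (N * N))
    print("distinct sets of stable states ('facts')   :", len(structures_per_fact))
    print("largest number of structures sharing one set of stable states:",
          max(structures_per_fact.values()))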

The above example of a 5-automata network is taken from a model in immunology, and it shows that small numbers of interacting units are already enough to generate such underdetermination. It is not necessary to consider the billions of interacting neurons of a mammalian brain, nor to resort to psychology and to the usual neuro-philosophical mind-body problem, to conjure up such underdetermination. Rather, studying cooperative biological systems like the immune network could shed some light on the mind-body problem, by depriving it of some of its specificity: underdetermination of theories in psychology is not due to a kind of substantial or even epistemological dualism, but only to the complexity of the system under study--to the fact that it cannot be completely determined even by a large amount of empirical data.

Now, let us remember that it is in psychology and the cognitive sciences that large automata network computation has become very popular in recent years, as a very crude simulation of the behavior of neural networks. (This is an extension of the classical work of McCulloch and Pitts on neural nets in the forties, made possible today thanks to the availability of computing facilities.) This is why most people view such networks as models of cognitive systems--not as theories which account for facts, but as the cognitive systems which build the theories, like our brains. But if we change our viewpoint and look at these networks as models of the cognitive systems whereby we perceive, understand and communicate, the perspective is completely changed. What appeared as a weakness in the theories (their underdetermination by the facts) now appears as a positive feature of our cognitive neural networks. The same phenomenon--the fact that the number of different network structures is much larger than the number of their states--now has a different meaning. It is transformed into a positive attribute of robustness and structural stability which allows very different neural nets to reach identical stable states. If we are willing to recognize some validity in the "neural net computation model" of our brains (in spite of its oversimplification and underdetermination), this stability is very important for the functioning of our cognitive systems. Indeed, we have every reason to believe that both genetically and epigenetically determined structures in our brains lead to very different connection patterns for different individuals, at least as far as the detail of the connections is concerned. And the neural net computation model shows us that different neural networks with different histories may nevertheless reach similar stable states. This may allow us to share some identical stable patterns in our different cognitive systems, and thereby to more or less understand one another. Thus, what appears as an irreducible complexity of our cognitive systems (from the viewpoint of the underdetermination of our theories about their structure and function) may also be what allows intersubjectivity.

Meaning and Intentionality

In all these models of self-organization, we always take something for granted, namely, the meaning of the pattern which emerges and stabilizes, and the function it performs.

However, as we have begun to discuss above, this question of meaning is what makes all the difference between natural systems, which we observe from the outside, and human systems (also man-made systems), which we can observe at least partially from the inside. In man-made systems, the meaning of a structure or a function is given by a goal which is generally set up purposefully. The goal provides the criterion for meaningfulness. A process or a structure is meaningful insofar as it is good for doing something. In man-made artificial systems, as in any program, the end is previously decided upon by the designer of the system or by the programmer.

On the other hand, in natural, non-human systems, we do not even know whether or not there is an end; and if one exists, it is set up from inside, by the system itself. That is, the source of meaning, if there is one, is inside the system itself and can only be guessed at by us, as external observers or users.

As a matter of fact, we have seen that we do not have to assume that there is a purpose in natural systems, even if they appear to be oriented towards some globally defined end, as are living organisms. Our simple simulations of self-organizing networks show us that what appears to us, external observers, as a function with a meaning (e.g. discriminating between patterns) may result from a mixture of determinations and randomness, having no purpose whatsoever.

Human systems lie somewhere in between, in a strange intermediate position. As creations of nature, they are a mixture of determinations and randomness. If there is meaning in the functioning of the mind, the body, or of society, it comes from inside, and there is no designer or programmer who can tell us with certainty what it is, as is equally true for any natural self-organizing system. However, our situation is also that of observers from the inside, and we plan things for ourselves which have a purpose, and try, often successfully, to achieve goals we have set for ourselves. From this point of view, we have some access to what makes the meaning of human life, whether at the individual or at the collective level.

One can say that all this is mere illusion, that we are driven by unconscious drives which are basically physicochemical forces. This strong reductionist position may be ultimately or metaphysically true; however, it is not true empirically, because we do not have the tools--neither the concepts nor the techniques--to analyze things in such terms. Reductionism in its classical form, which reduces psychological and social domains to biology and physics, has been criticized many times, and there is no need for me to repeat those criticisms here. However, it is important to realize that a different kind of reductionism, working in the opposite direction, from biology to the psychology of cognition, is now becoming popular.

There is a tendency to see biology from the point of view of a psychologist or a cognitivist, i.e. to view biological processes, especially biological development and the evolution of the species, as cognitive processes, or systems which grow and evolve by the acquisition of knowledge concerning their environment. This may appear similar to Piaget's use of the concept of "assimilation and organization" to describe both biological and cognitive self-organization. However, one must be cautious when making these kinds of analogies.

One should neither interpret them literally nor utilize them as the basis for a monistic, all-encompassing view of the universe, whereby one kind of phenomenon is reduced to the other. Classical reductionism reduced psychology to biology and physics. The new reductionism, within the framework of a unified evolutionary theory based on Darwin and Lorenz, works in the opposite direction. The reference to Darwin does not prevent the use of anthropomorphisms in describing the organisms as "knowledge acquisition systems" which explore their environments. Sometimes, in order to complete the unified picture, this evolutionary process is extended back to primitive forms of life, and even to the origin of the universe and the Big Bang, with the help of the so-called anthropic principle.

It is true that on a certain general and abstract level, organisms may be described as "self-organizing cognitive networks." We have seen some examples (especially of functional self-organization) which simulate the emergence of biological functions and can be described, in a sense, as "a self-creation of meaning." However, there are two different ways of making use of these analogies. One is to interpret them literally and to fall into the trap of reductionism of one kind or another. Another way of looking at these analogies is to consider them as being merely formal; that is, not to go beyond the fact that biological processes and cognitive processes may be described, to some extent, by identical mathematical and logical tools.

There is nothing new in this. The same mathematics--differential calculus, statistics, group theory, etc.--have always been used to clarify problems in all the sciences, from physics to chemistry, biology, sociology and psychology. What is new with the cognitive sciences are the kinds of mathematical tools which are used--automata networks, whose theory has been worked out recently, with cellular automata, boolean networks and neural nets as particular instances. The fact that neural net formalism can be used to describe the phenomena of self-organization and emergence in biological pattern formation, as well as in cognitive and social processes per se, does not imply that the latter can be reduced to the former, or vice versa.

It is as though the fact that the same exponential function may describe (under certain conditions) chemical kinetics, cellular growth kinetics and human population kinetics, allows one to consider that these processes, whose physical substrates are different, may be reduced to one another. One could as soon reduce them to radioactive decay, which also obeys an exponential law.

It should be remembered that many important breakthroughs in the theory of neural networks came from the physics of spin glasses, a recently developed branch of magnetism. This is certainly not a reason for viewing cognitive processes as an instance of magnetism, or for viewing magnetic alloys as cognitive machines, which acquire and process knowledge.

This content downloaded from 142.134.145.190 on Mon, 5 May 2014 15:44:50 PMAll use subject to JSTOR Terms and Conditions

Page 16: Creativity in Nature and in the Mind: Novelty in Biology and in the Biologist's Brain

Creativity in Nature and in the Mind 69

Here again, what is missing, when the formal analogies are taken as real, is the relevance of the specific phenomena and the meaning that they acquire from their specific contexts of observation and experience.

It is easy to realize that the new reductionism--the one from biology to the cognitive sciences--is none other than a neo-vitalism, in spite of the constant references to Darwin and molecular biology. Real cognitive processes which serve as references in the analogy are known to us from observations on the psychological level. For example, without observations of humans who use combinatorial language with its syntactic and semantic properties to acquire and process knowledge, we would not even think about cognition and knowledge-acquisition when analyzing the spatio-temporal evolution of molecules, cells and organisms. When transposing these notions to different levels of organization, such as the molecular one, we should never forget that we are doing a metaphorical transposition, which may have a heuristic value--as when one speaks of "selfish" DNA or "selfish" genes--but which should not be interpreted literally. If we do so, there is nothing to prevent us from assigning intentions and consciousness to cells and molecules. Already one finds this ambiguity in the transposition of concepts from computer sciences to biology, the most misleading of all being the metaphor of the genetic "program."

Originally the idea of a genetic "program" written by natural selection was aimed at depriving biology of finalism and vitalism. The goal, as Pittendrigh put it (followed by Mayr, Monod and others), was to replace purposeful end-seeking processes by non-purposeful end-seeking processes. Thus, it was hoped that the specificity of biological phenomena would be maintained with regard to their physical and chemical substrates, without the need to resort to a special substance, called "Life," which until the beginning of the century was thought to be endowed, like our minds, with intentional properties and purpose.

Nevertheless, no one has ever seen a computer program written without a well-defined purpose. Therefore, when the "program" metaphor is taken literally, intentionality and purpose come back, and physics and chemistry tend to be forgotten in the description of actual biological development and evolution. Then "natural selection" replaces God as ultimate explanatory principle in the classical tautological vitalist reasoning--what happens is what ought to happen because it has been programmed that way; otherwise it would not have succeeded in evolution. Which is not very different from saying that things happen the way they happen because Nature has programmed them that way, or because such is the will of God.

This content downloaded from 142.134.145.190 on Mon, 5 May 2014 15:44:50 PMAll use subject to JSTOR Terms and Conditions

Page 17: Creativity in Nature and in the Mind: Novelty in Biology and in the Biologist's Brain

70 Henri Atlan

Thus, if it is true that increasing complexity in biological evolution and in biological pattern formation is similar to the processes that produce pattern and order in human society, this "formal" similarity should not lead to confusion. Developmental biologist Brian Goodwin, in an article on "A cognitive view of biological process," rightly warned that to forget that "similarity is not identity" would be to "simply commit again the reductionist fallacy on another level."

In other words, in spite of formal analogies, the creativity of the mind is different from the creativity of nature, because we experience it through different means of observation and understanding. To say that our mind is itself a part of nature is of no help, because we experience it from inside, being oriented by our purposes, whereas we observe the rest of nature from the outside, with no evidence of purpose. Therefore, novelty in the biologist's brain must not be identified with novelty in biological models (even if some of these models account for the appearance of novelty). Novelty in the biologist's brain, as in any human brain, leaves room for purposeful creation of meaning, whereas biological models always deal with apparently non-purposeful organization.

If we want to live in society, where our purposeful decisions are taken seriously and we are considered morally and legally responsible for them, we must clearly distinguish between two kinds of self-organizing systems in nature. The first are the non-human, non-intentional ones, where a mixture of determinations and randomness results in the emergence of structure and functions at a different level. However, these emergences can be observed only from outside, and have meaning only for us, the observers--even when an "ideal objective observer" can be defined, based on a consensus about the conditions of observation and measurement.

The second type of self-organizing system is different. By projecting our own intentionality, we accept the idea that what we observe in other human beings can also be the result (at least partly) of intentionality. Although others appear to us as "emerging structures and functions," and in principle could be attributed to a mixture of determinations and randomness, we consider them as being able to truly generate the meaning of their behavior. We do not (because we cannot) use scientific objective reasoning to justify this distinction between intentional and non-intentional self-organizing systems. However, it is a prejudice that we ultimately accept, for practical, ethical and social reasons.

Université de Paris VI
Hadassah University Hospital, Jerusalem


BIBLIOGRAPHY

H. Atlan, Entre le cristal et la fumée, Seuil, Paris, 1979.
H. Atlan et al., "Emergence of Classification Procedures in Automata Networks as a Model for Functional Self-Organization," J. Theor. Biol. 120, 371-380, 1986.
H. Atlan, "Self-Creation of Meaning," Physica Scripta 36, 563-576, 1987.
H. Atlan, "Automata Network Theories in Immunology: Their Utility and their Underdetermination," Bull. Math. Biol. 51(2), 247-253, 1989.
H. Atlan and I. R. Cohen, eds., Theories of Immune Networks, Springer-Verlag, Heidelberg, 1989.
