precis of "the creative mind: myths and mechanisms"

Upload: blah234m

Post on 11-Oct-2015

13 views

Category:

Documents


0 download

DESCRIPTION

Precis of "The creative mind: Myths and mechanisms"

TRANSCRIPT

--===Precis of "The creative mind: Myths and mechanisms"===-- Below is the unedited preprint (not a quotable final draft) of: Boden, Margaret A. (1994). Precis of The creative mind: Myths and mechanisms. *Behavioral andBrain Sciences* 17 (3): 519-570. The final published draft of the target article, commentaries and Author's Response arecurrently available only in paper. -----------------------------------------------------------------------------------------------__ *For information on becoming a commentator on this or other BBS target articles, write to:[email protected] [[email protected]] For information about subscribing or purchasing offprints of the published version, withcommentaries and author's response, write to: [email protected][[email protected]] (North America) or [email protected][[email protected]] (All other countries). *__ ------------------------------------------------------------------------------------------------===Precis of "THE CREATIVE MIND: MYTHS AND MECHANISMS" London: Weidenfeld &Nicolson 1990(Expanded edn., London: Abacus, 1991.) ===- --======--Margaret A. Boden School of Cognitive and Computing Sciences University of Sussex England FAX: 0273-671320 [email protected] -==Keywords==- *creativity, intuition, discovery, association, induction, representation, unpredictability,artificial intelligence, computer music, story-writing, computer art, Turing test *-==Abstract==- What is creativity? One new idea may be creative, while another is merely new: what's thedifference? And how is creativity possible? -- These questions about human creativity can beanswered, at least in outline, using computational concepts. There are two broad types of creativity, improbabilist and impossibilist. Improbabilistcreativity involves (positively valued) novel combinations of familiar ideas. A deeper typeinvolves METCS: the mapping, exploration, and transformation of conceptual spaces. It isimpossibilist, in that ideas may be generated which -- with respect to the particularconceptual space concerned -- could not have been generated before. (They are made possible bysome transformation of the space.) The more clearly conceptual spaces can be defined, thebetter we can identify creative ideas. Defining conceptual spaces is done by musicologists,literary critics, and historians of art and science. Humanist studies, rich in intuitivesubtleties, can be complemented by the comparative rigour of a computational approach. Computational modelling can help to define a space, and to show how it may be mapped,explored, and transformed. Impossibilist creativity can be thought of in "classical" AI-terms,whereas connectionism illuminates improbabilist creativity. Most AI-models of creativity canonly explore spaces, not transform them, because they have no self-reflexive maps enablingthem to change their own rules. A few, however, can do so. A scientific understanding of creativity does not destroy our wonder at it, nor make creativeideas predictable. Demystification does not imply dehumanization. -----------------------------------------------------------------------------------------------Chapter 1: The Mystery of Creativity Creativity surrounds us on all sides: from composers to chemists, cartoonists tochoreographers. But creativity is a puzzle, a paradox, some say a mystery. Inventors,scientists, and artists rarely know how their original ideas arise. They mention intuition,but cannot say how it works. Most psychologists cannot tell us much about it, either. 
What's more, many people assume that there will never be a scientific theory of creativity -- for how could science possibly explain fundamental novelties? As if all this were not daunting enough, the apparent unpredictability of creativity seems (to many people) to outlaw any systematic explanation, whether scientific or historical.

Why does creativity seem so mysterious? Artists and scientists typically have their creative ideas unexpectedly, with little if any conscious awareness of how they arose. But the same applies to much of our vision, language, and common-sense reasoning. Psychology includes many theories about unconscious processes. Creativity is mysterious for another reason: the very concept is seemingly paradoxical.

If we take seriously the dictionary-definition of creation, "to bring into being or form out of nothing", creativity seems to be not only beyond any scientific understanding, but even impossible. It is hardly surprising, then, that some people have "explained" it in terms of divine inspiration, and many others in terms of some romantic intuition, or insight. From the psychologist's point of view, however, "intuition" is the name not of an answer, but of a question. How does intuition work?

In this book, I argue that these matters can be better understood, and some of these questions answered, with the help of computational concepts.

This claim in itself may strike some readers as absurd, since computers are usually assumed to have nothing to do with creativity. Ada Lovelace is often quoted in this regard: "The Analytical Engine has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform." If this is taken to mean that a computer can do only what its program enables it to do, it is of course correct. But it does not follow that there can be no interesting relations between creativity and computers.

We must distinguish four different questions, which are often confused with each other. I call them Lovelace questions, and state them as follows:

(1) Can computational concepts help us to understand human creativity?

(2) Could a computer, now or in the future, appear to be creative?

(3) Could a computer, now or in the future, appear to recognize creativity?

(4) Could a computer, however impressive its performance, really be creative?

The first three of these are empirical, scientific, questions. In Chapters 3-10, I argue that the answer to each of them is "Yes". (The first Lovelace question is discussed in each of those chapters; in Chapters 7-8, the second and third are considered also.)

The fourth Lovelace question is not a scientific enquiry, but a philosophical one. (More accurately, it is a mix of three complex, and highly controversial, philosophical problems.) I discuss it in Chapter 11. However, one may answer "Yes" to the first three Lovelace questions without necessarily doing so for the fourth. Consequently, the fourth Lovelace question is ignored in the main body of the book, which is concerned rather with the first three Lovelace questions.

Chapter 2: The Story so Far

This chapter draws on some of the previous literature on creativity. But it is not a survey. Its aim is to introduce the main psychological questions, and some of the historical examples, addressed in detail later in the book. The main writers mentioned are Poincare (1982), Hadamard (1954), Koestler (1975), and Perkins (1981).

Among the points of interest in Poincare's work are his views on associative memory.
He described our ideas as "something like the hooked atoms of Epicurus," flashing in every direction like "a swarm of gnats, or the molecules of gas in the kinematic theory of gases". He was well aware that how the relevant ideas are aroused, and how they are joined together, are questions which he could not answer in detail. Another interesting aspect of Poincare's approach is his distinction between four "phases" of creativity, some conscious, some unconscious.

These four phases were later named (by Hadamard) as preparation, incubation, inspiration, and verification (evaluation). Hadamard, besides taking up Poincare's fourfold distinction, spoke of finding problem-solutions "quite different" from any he had previously tried. If (as Poincare had claimed) the gnat-like ideas were only "those from which we might reasonably expect the desired solution", then how could such a thing happen?

Perkins has studied the four phases, and criticizes some of the assumptions made by Poincare and Hadamard. In addition, he criticizes the romantic notion that creativity is due to some special gift. Instead, he argues that "insight" involves everyday psychological capacities, such as noticing and remembering. (The "everyday" nature of creativity is discussed in Chapter 10.)

Koestler's view that creativity involves "the bisociation of matrices" comes closest to my own approach. However, his notion is very vague. The body of my book is devoted to giving a more precise account of the structure of "matrices" (of various kinds), and of just how they can be "bisociated" so as to result in a novel idea -- sometimes (as in Hadamard's experience) one quite different from previous ideas. (Matrices appear in my terminology as conceptual spaces, and different forms of bisociation as association, analogy, exploration, or transformation.)

Among the examples introduced here are Kekule's discovery of the cyclical structure of the benzene molecule, Kepler's (and Copernicus') thoughts on elliptical orbits, and Coleridge's poetic imagery in Kubla Khan. Others mentioned in passing include Coleridge's announced intention to write a poem about an ancient mariner, Bach's harmonically systematic set of preludes and fugues, the jazz-musician's skill in improvising a melody to fit a chord sequence, and our everyday ability to recognize that two different apples fall into the same class. All these examples, and many others, are mentioned in later chapters.

Chapter 3: Thinking the Impossible

Given the seeming paradoxicality of the concept of creativity (noted in Chapter 1), we need to define it carefully before going further. This is not straightforward (over 60 definitions appear in the psychological literature (Taylor, 1988)). Part of the reason for this is that creativity is not a natural kind, such that a single scientific theory could explain every case. We need to distinguish "improbabilist" and "impossibilist" creativity, and also "psychological" and "historical" creativity.

People of a scientific cast of mind, anxious to avoid romanticism and obscurantism, generally define creativity in terms of novel combinations of familiar ideas. Accordingly, the surprise caused by a creative idea is said to be due to the improbability of the combination. Many psychometric tests designed to measure creativity work on this principle.

The novel combinations must be valuable in some way, because to call an idea creative is to say that it is not only new, but interesting.
However, combination-theorists often omit value from their definition of creativity (although psychometricians may make implicit value-judgements when scoring the novel combinations produced by their experimental subjects). A psychological explanation of creativity focusses primarily on how creative ideas are generated, and only secondarily on how they are recognized as being valuable. As for what counts as valuable, and why, these are not purely psychological questions. They also involve history, sociology, and philosophy, because value-judgments are largely culture-relative (Brannigan, 1981; Schaffer, in press). Even so, positive evaluation should be explicitly mentioned in definitions of creativity.

Combination-theorists may think they are not only defining creativity, but explaining it, too. However, they typically fail to explain how it was possible for the novel combination to come about. They take it for granted, for instance, that we can associate similar ideas and recognize more distant analogies, without asking just how such feats are possible. A psychological theory of creativity needs to explain how associative and analogical thinking works (matters discussed in Chapters 6 and 7, respectively).

These two cavils aside, what is wrong with the combination-theory? Many ideas which we regard as creative are indeed based on unusual combinations. For instance, the appeal of Heath-Robinson machines lies in the unexpected uses of everyday objects; and poets often delight us by juxtaposing seemingly unrelated concepts. For creative ideas such as these, a combination-theory, supplemented by psychological explanations of association and analogy, might suffice.

Many creative ideas, however, are surprising in a deeper way. They concern novel ideas that not only did not happen before, but which -- we intuitively feel -- could not have happened before.

Before considering just what this "could not" means, we must distinguish two further senses of creativity. One is psychological, or personal: I call it P-creativity. The other is historical: H-creativity. The distinction between P-creativity and H-creativity is independent of the improbabilist/impossibilist distinction made above: all four combinations occur. However, I use the P/H distinction primarily to compare cases of impossibilist creativity.

Applied to impossibilist examples, a valuable idea is P-creative if the person in whose mind it arises could not (in the relevant sense of "could not") have had it before. It does not matter how many times other people have already had the same idea. By contrast, a valuable idea is H-creative if it is P-creative and no-one else, in all human history, has ever had it before.

H-creativity is something about which we are often mistaken. Historians of science and art are constantly discovering cases where other people have had an idea popularly attributed to some national or international hero. Even assuming that the idea was valued at the time by the individual concerned, and by some relevant social group, our knowledge of it is largely accidental. Whether an idea survives, and whether historians at a given point in time happen to have evidence of it, depend on a wide variety of unrelated factors. These include flood, fashion, rivalries, illness, trade-patterns, and wars.

It follows that there can be no systematic explanation of H-creativity, no theory that explains all and only H-creative ideas. For sure, there can be no psychological explanation of this historical category. But all H-creative ideas, by definition, are P-creative too.
So a psychological explanation of P-creativity would include H-creative ideas as well.

What does it mean to say that an idea "could not" have arisen before? Unless we know that, we cannot make sense of P-creativity (or H-creativity either), for we cannot distinguish radical novelties from mere "first-time" newness.

An example of a novelty that clearly could have happened before is a newly-generated sentence, such as "The deckchairs are on the top of the mountain, three miles from the artificial flowers". I have never thought of that sentence before, and probably no-one else has, either. Chomsky remarked on this capacity of language-speakers to generate first-time novelties endlessly, and called language "creative" accordingly. But the word "creative" was ill-chosen. Novel though the sentence about deckchairs is, there is a clear sense in which it could have occurred before. For it can be generated by any competent speaker of English, following the same rules that can generate other English sentences. To come up with a new sentence, in general, is not to do something P-creative.

The "coulds" in the previous paragraph are computational "coulds". In other words, they concern the set of structures (in this case, English sentences) described and/or produced by one and the same set of generative rules (in this case, English grammar). There are many sorts of generative system: English grammar is like a mathematical equation, a rhyming-schema for sonnets, the rules of chess or tonal harmony, or a computer program. Each of these can (timelessly) describe a certain set of possible structures. And each might be used, at one time or another, in actually producing those structures.

Sometimes, we want to know whether a particular structure could, in principle, be described by a specific schema, or set of abstract rules. -- Is "49" a square number? Is 3,591,471 a prime? Is this a sonnet, and is that a sonata? Is that painting in the Impressionist style? Could that geometrical theorem be proved by Euclid's methods? Is that word-string a sentence? Is a benzene-ring a molecular structure describable by early nineteenth-century chemistry (before Kekule had his famous vision in 1865)? -- To ask whether an idea is creative or not (as opposed to how it came about) is to ask this sort of question.

But whenever a structure is produced in practice, we can also ask what generative processes actually went on in its production. -- Did a particular geometer prove a particular theorem in this way, or in that? Was the sonata composed by following a textbook on sonata-form? Did Kekule rely on the then-familiar principles of chemistry to generate his seminal idea of the benzene-ring, and if not how did he come up with it? -- To ask how an idea (creative or otherwise) actually arose, is to ask this type of question.

We can now distinguish first-time novelty from impossibilist originality. A merely novel idea is one which can be described and/or produced by the same set of generative rules as are other, familiar, ideas. A genuinely original, or radically creative, idea is one which cannot. It follows that the ascription of (impossibilist) creativity always involves tacit or explicit reference to some specific generative system.

It follows, too, that constraints -- far from being opposed to creativity -- make creativity possible. To throw away all constraints would be to destroy the capacity for creative thinking. Random processes alone, if they happen to produce anything interesting at all, can result only in first-time curiosities, not radical surprises.
(As explained in Chapter 9, randomness can sometimes contribute to creativity -- but only in the context of background constraints.)

Chapter 4: Maps of the Mind

The definition of (impossibilist) creativity given in Chapter 3 implies that, with respect to the usual mental processing in the relevant domain (chemistry, poetry, music ...), a creative idea may be not just improbable, but impossible. How could it arise, then, if not by magic? And how can one impossible idea be more surprising, more creative, than another? If an act of creation is not mere combination, what is it? How can such creativity possibly happen?

To understand this, we need to think of creativity in terms of the mapping, exploration, and transformation of conceptual spaces. (The notion of a conceptual space is used informally in this chapter; later, we see how conceptual spaces can be described more rigorously.) A conceptual space is a style of thinking. Its dimensions are the organizing principles which unify, and give structure to, the relevant domain. In other words, it is the generative system which underlies that domain and which defines a certain range of possibilities: chess-moves, or molecular structures, or jazz-melodies.

The limits, contours, pathways, and structure of a conceptual space can be mapped by mental representations of it. Such mental maps can be used (not necessarily consciously) to explore -- and to change -- the spaces concerned.

Evidence from developmental psychology supports this view. Children's skills are at first utterly inflexible. Later, imaginative flexibility results from "representational redescriptions" (RRs) of (fluent) lower-level skills (Clark & Karmiloff-Smith, in press; Karmiloff-Smith, 1993). These RRs provide many-levelled maps of the mind, which are used by the subject to do things he or she could not do before.

For example, children need RRs of their lower-level drawing-skills in order to draw non-existent, or "funny", objects: a one-armed man, or seven-legged dog. Lacking such cognitive resources, a 4-year-old simply cannot spontaneously draw a one-armed man, and finds it very difficult even to copy a drawing of a two-headed man. But 10-year-olds can explore their own man-drawing skill, by using strategies such as distorting, repeating, omitting, or mixing parts. These imaginative strategies develop in a fixed order: children can change the size or shape of an arm before they can insert an extra one, and long before they can give the drawn man wings in place of arms.

The development of RRs is a mapping-exercise, whereby people develop explicit mental representations of knowledge already possessed implicitly.

Few AI-models of creativity contain reflexive descriptions of their own procedures, and/or ways of varying them. Accordingly, most AI-models are limited to exploring their conceptual spaces, rather than transforming them (see Chapters 7 & 8).

Conceptual spaces can be explored in various ways. Some exploration merely shows us something about the nature of the relevant conceptual space which we had not explicitly noticed before. When Dickens described Scrooge as "a squeezing, wrenching, grasping, scraping, clutching, covetous old sinner", he was exploring the space of English grammar. He was reminding the reader (and himself) that the rules of grammar allow us to use seven adjectives before a noun. That possibility already existed, although its existence may not have been realized by the reader.
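The contrast between exploring a space and transforming it can be put in computational miniature. The sketch below is purely illustrative: the toy grammar, its rule names, and the particular "tweak" are my own assumptions, not anything taken from the book. Phrases generated from the fixed rules are at most first-time novelties (Chapter 3's merely new sentences), since the same rules could always have produced them; altering a rule of the grammar itself is a (very small) transformation, changing what could be generated at all.

    import random

    # A miniature "conceptual space": a toy generative grammar for noun phrases.
    # (Hypothetical rules, for illustration only.)
    GRAMMAR = {
        "NP":   [["DET", "ADJS", "N"]],
        "ADJS": [["ADJ"], ["ADJ", "ADJS"], []],   # zero or more adjectives
        "DET":  [["the"], ["a"]],
        "ADJ":  [["squeezing"], ["wrenching"], ["grasping"], ["covetous"]],
        "N":    [["sinner"], ["deckchair"], ["mountain"]],
    }

    def generate(symbol, rules):
        """Expand a symbol using the given rules: exploration of the space."""
        if symbol not in rules:                    # terminal word
            return [symbol]
        expansion = random.choice(rules[symbol])   # a choice-point within the constraints
        return [word for part in expansion for word in generate(part, rules)]

    # Exploration: every phrase produced here is "merely novel" -- it may never
    # have been produced before, but the same rules could always have produced it.
    print(" ".join(generate("NP", GRAMMAR)))

    # Transformation: change a rule of the space itself (here, dropping the
    # constraint that a noun phrase needs a determiner).  Phrases now become
    # possible which, relative to the OLD rules, could not have occurred.
    TRANSFORMED = dict(GRAMMAR)
    TRANSFORMED["NP"] = [["ADJS", "N"]]
    print(" ".join(generate("NP", TRANSFORMED)))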
Some exploration, by contrast, shows us the limits of the space, and identifies specific points at which changes could be made in one dimension or another. To overcome a limitation in a conceptual space, one must change it in some way. One may also change it, of course, without yet having come up against its limits. A small change (a "tweak") in a relatively superficial dimension of a conceptual space is like opening a door to an unvisited room in an existing house. A large change (a "transformation"), especially in a relatively fundamental dimension, is more like the instantaneous construction of a new house, of a kind fundamentally different from (albeit related to) the first.

A complex example of structural exploration and change can be found in the development of post-Renaissance Western music, based on the generative system known as tonal harmony. From its origins to the end of the nineteenth century, the harmonic dimensions of this space were continually tweaked to open up the possibilities (the rooms) implicit in it from the start. Finally, a major transformation generated the deeply unfamiliar (yet closely related) space of atonality.

Each piece of tonal music has a "home-key", from which it starts, from which (at first) it did not stray, and in which it must finish. Reminders of the home-key were constantly provided, as fragments of scales, chords, or arpeggios. As time passed, the range of possible home-keys became increasingly well-defined (Bach's "Forty-Eight" was designed to explore, and clarify, the tonal range of the well-tempered keys).

Soon, travelling along the path of the home-key alone became insufficiently challenging. Modulations between keys were then allowed, within the body of the composition. At first, only a small number of modulations (perhaps only one, followed by its "cancellation") were tolerated, between strictly limited pairs of harmonically-related keys. Over the years, the modulations became more daring, and more frequent -- until in the late nineteenth century there might be many modulations within a single bar, not one of which would have appeared in early tonal music. The range of harmonic relations implicit in the system of tonality gradually became apparent. Harmonies that would have been unacceptable to the early musicians, who focussed on the most central or obvious dimensions of the conceptual space, became commonplace.

Moreover, the notion of the home-key was undermined. With so many, and so daring, modulations within the piece, a "home-key" could be identified not from the body of the piece, but only from its beginning and end. Inevitably, someone (it happened to be Schoenberg) eventually suggested that the convention of the home-key be dropped altogether, since it no longer constrained the composition as a whole. (Significantly, Schoenberg suggested new musical constraints: using every note in the chromatic scale, for instance.)

However, exploring a conceptual space is one thing: transforming it is another. What is it to transform such a space?

One example has just been mentioned: Schoenberg's dropping the home-key constraint to create the space of atonal music. Dropping a constraint is a general heuristic, or method, for transforming conceptual spaces. The deeper the generative role of the constraint in the system concerned, the greater the transformation of the space. Non-Euclidean geometry, for instance, resulted from dropping Euclid's fifth axiom.

Another very general way of transforming conceptual spaces is to "consider the negative": that is, to negate a constraint.
One well-known instance concerns Kekule's discovery of the benzene-ring. He described it like this:

"I turned my chair to the fire and dozed. Again the atoms were gambolling before my eyes.... [My mental eye] could distinguish larger structures, of manifold conformation; long rows, sometimes more closely fitted together; all twining and twisting in snakelike motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke."

This vision was the origin of his hunch that the benzene-molecule might be a ring, a hunch that turned out to be correct. Prior to this experience, Kekule had assumed that all organic molecules are based on strings of carbon atoms. But for benzene, the valencies of the constituent atoms did not fit.

We can understand how it was possible for him to pass from strings to rings, as plausible chemical structures, if we assume three things (for each of which there is independent psychological evidence). First, that snakes and molecules were already associated in his thinking. Second, that the topological distinction between open and closed curves was present in his mind. And third, that the "consider the negative" heuristic was present also. Taken together, these three factors could transform "string" into "ring".

A string-molecule is an open curve: one having at least one end-point (with a neighbour on only one side). If one considers the negative of an open curve, one gets a closed curve. Moreover, a snake biting its tail is a closed curve which one had expected to be an open one. For that reason, it is surprising, even arresting ("But look! What was that?"). Kekule might have had a similar reaction if he had been out on a country walk and happened to see a snake with its tail in its mouth. But there is no reason to think that he would have been stopped in his tracks by seeing a Victorian child's hoop. A hoop is a hoop, is a hoop: no topological surprises there. (No topological surprises in a snaky sine-wave, either: so two intertwined snakes would not have interested Kekule, though they might have stopped Francis Crick dead in his tracks, a century later.)

Finally, the change from open curves to closed ones is a topological change, which by definition will alter neighbour-relations. And Kekule was an expert chemist, who knew very well that the behaviour of a molecule depends partly on how the constituent atoms are juxtaposed. A change in atomic neighbour-relations is very likely to have some chemical significance. So it is understandable that he had a hunch that this tail-biting snake-molecule might contain the answer to his problem.

Plausible though this talk of conceptual spaces may be, it is -- thus far -- largely metaphorical. I have claimed that in calling an idea creative one should specify the particular set of generative principles with respect to which it is impossible. But I have not said how the (largely tacit) knowledge of literary critics, musicologists, and historians of art and science might be explicitly expressed within a psychological theory of creativity. Nor have I said how we can be sure that the mental processes specified by the psychologist really are powerful enough to generate such-and-such ideas from such-and-such structures.

This is where computational psychology can help us. I noted above, for example, that representational redescription develops explicit mental representations of knowledge already possessed implicitly.
In computational terms, one could -- and Karmiloff-Smith does -- put this by saying that knowledge embedded in procedures becomes available, after redescription, as part of the system's data-structures. Terms like procedures and data-structures are well understood, and help us to think clearly about the mapping and negotiation of conceptual spaces. In general, whatever computational psychology enables us to say, it enables us to say relatively clearly.

Moreover, computational questions can be supplemented by computational models. A functioning computer program, in effect, enables the system to use its maps not just to contemplate the relevant conceptual territory, but to explore it actively. So as well as saying what a conceptual space is like (by mapping it), we can get some clear ideas about how it is possible to move around within it. In addition, those (currently, few) AI-models of creativity which contain reflexive descriptions of their own procedures, and ways of varying them, can transform their own conceptual spaces, as well as exploring them.

The following chapters, therefore, employ a computational approach in discussing the account of creativity introduced in Chapters 1-4.

Chapter 5: Concepts of Computation

Computational concepts drawn from "classical" (as well as connectionist) AI can help us to think about the nature, form, and negotiation of conceptual spaces. Examples of such concepts, most of which were inspired by pre-existing psychological notions in the first place, include the following: generative system, heuristic (both introduced in previous chapters), effective procedure, search-space, search-tree, knowledge representation, semantic net, scripts, frames, what-ifs, and analogical representation.

Each of these concepts is briefly explained in Chapter 5, for people who (unlike BBS-readers) may know nothing about AI or computational psychology. And they are related to a wide range of everyday and historical examples -- some of which will be mentioned again in later chapters.

My main aim, here, is to encourage the reader to use these concepts in considering specific cases of human thought. A secondary aim is to blur the received distinction between "the two cultures". The differences between creativity in art and science lie less in how new ideas are generated than in how they are evaluated, once they have arisen. The uses of computational concepts in this chapter are informal, even largely metaphorical. But in bringing a computational vocabulary to bear on a variety of examples, the scene is set for more detailed consideration (in Chapters 6-8) of some computer models of creativity.

In Chapter 5, I refer very briefly to a few AI-programs (such as chess-machines and Schankian question-answering programs). Only two are discussed at any length: Longuet-Higgins' (1987) work on the perception of tonal harmony, and Gelernter's (1963) geometry (theorem-proving) machine.

Longuet-Higgins' work is not intended as a model of musical creativity. Rather, it provides (in my terminology) a map of a certain sort of musical space: the system of tonal harmony introduced in Chapter 4. In addition, it suggests some ways of negotiating that space, for it identifies musical heuristics that enable the listener to appreciate the structure of the composition. Just as speech perception is not the same as speech production, so appreciating music is different from composing it. Nevertheless, some of the musical constraints that face composers working in this particular genre have been identified in this work.
I also mention Longuet-Higgins' recent work on musical expressiveness, but do not describe it here. In (Boden, in press), I say a little more about it. Without expression, music sounds "dead", even absurd. In playing the notes in a piano-score, for instance, pianists add such features as legato, staccato, piano, forte, sforzando, crescendo, diminuendo, rallentando, accelerando, ritenuto, and rubato. But how? Can we express this musical sensibility precisely? That is, can we specify the relevant conceptual space?

Longuet-Higgins (in preparation), using a computational method, has tried to specify the musical skills involved in playing expressively. Working with two of Chopin's piano-compositions, he has discovered some counterintuitive facts. For example, a crescendo is not uniform, but exponential (a uniform crescendo does not sound like a crescendo at all, but like someone turning up the volume-knob on a wireless); similarly, a rallentando must be exponentially graded (in relation to the number of bars in the relevant section) if it is to sound "right". Where sforzandi are concerned, the mind is highly sensitive: as little as a centisecond makes a difference between acceptable and clumsy performance.

This work is not a study of creativity. It does not model the exploration of a conceptual space, never mind its transformation. But it is relevant because creativity can be ascribed to an idea (including a musical performance) only by reference to a particular conceptual space. The more clearly we can map this space, the more confidently we can identify and ask questions about the creativity involved in negotiating it. A pianist whose playing-style sounds "original", or even "idiosyncratic", is exploring and transforming the space of expressive skills which Longuet-Higgins has studied.

Gelernter's program, likewise, was not focussed on creativity as such. (It was not even intended as a model of human psychology.) Rather, it was an early exercise in automatic problem-solving, in the domain of Euclidean geometry. However, it is well known that the program was capable of generating a highly elegant proof (that the base-angles of an isosceles triangle are equal), whose H-creator was the fourth-century mathematician Pappus.

Or rather, it is widely believed that Gelernter's program could do this. The ambiguity, not to say the mistake, arises because the program's proof is indeed the same as Pappus' proof, when both are written down on paper in the style of a geometry text-book. But the (creative) mental processes by which Pappus did this, and by which the modern geometer is able to appreciate the proof, were very different from those in Gelernter's program -- which were not creative at all.

Consider (or draw) an isosceles triangle ABC, with A at the apex. You are required to prove that the base-angles are equal. The usual method of proving this, which the program was expected to employ, is to construct a line bisecting angle BAC, running from A to D (a point on the baseline, BC). Then, the proof goes as follows:

    Consider triangles ABD and ACD.
    AB = AC (given)
    AD = DA (common)
    Angle BAD = angle DAC (by construction)
    Therefore the two triangles are congruent (two sides and included angle equal)
    Therefore angle ABD = angle ACD. Q.E.D.

By contrast, the Gelernter proof involved no construction, and went as follows:

    Consider triangles ABC and ACB.
    Angle BAC = angle CAB (common)
    AB = AC (given)
    AC = AB (given)
    Therefore the two triangles are congruent (two sides and included angle equal)
    Therefore angle ABC = angle ACB. Q.E.D.
And, written down on paper, this is the outward form of Pappus' proof, too.

The point, here, is that Pappus' own notes (as well as the reader's geometrical intuitions) show that in order to produce or understand this proof, a human being considers one and the same triangle rotated (as Pappus put it, lifted up and replaced in the trace left behind by itself). There were thus two creative aspects of this proof. First, when "congruence" is in question, the geometer normally thinks of two entirely separate triangles (or, sometimes, two distinct triangles having one side in common). Second, Euclidean geometry deals only with points, lines, and planes -- so one would expect any proof to be restricted to two spatial dimensions. But Pappus (and you, when you thought about this proof) imagined lifting and rotating the triangle in the third dimension. He was, if you like, cheating. However, to transform a rule (an aspect of some conceptual space) is to change it: in effect, to cheat. In that sense, transformational creativity always involves cheating.

Gelernter's geometry-program did not cheat -- not merely because it was too rigid to cheat in any way, but also because it could not have cheated in this way. It knew nothing of the third dimension. Indeed, it had no visual, analogical, representation of triangles at all. It represented a triangle not as a two-dimensional spatial form, but as a list of three letters (e.g. ABC) naming points in an abstract coordinate space. Similarly, it represented an angle as a list of three letters naming the vertex and one of the points on each of the two rays. Being unable to inspect triangles visually, it even had to prove that every different letter-name for what we can see to be the same angle was equivalent. So it had to prove (for instance) that angle XYZ is the same as angle ZYX, and angle BAC the same as angle CAB. Consequently, this program was incapable not only of coming up with Pappus' proof in the way he did, but even of representing such a proof -- or of appreciating its elegance and originality. Its mental maps simply did not allow for the lifting and replacement of triangles in space (and it had no heuristics enabling it to transform those maps).

How did it come up with its pseudo-Pappus proof, then? Treating the "ABC's" as (spatially uninterpreted) abstract vectors, it did a massive brute-search to find the proof. Since this brute search succeeded, it did not bother to construct any extra lines.
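The representational point is easy to make concrete. The sketch below is my own illustration, not Gelernter's actual data structures or proof procedure: angles are nothing but letter-triples with the vertex in the middle, so "angle BAC" and "angle CAB" are distinct symbols whose sameness has to be established by a rule, and the pseudo-Pappus congruence amounts to matching triples -- no lifting, rotating, or seeing involved.

    # Purely symbolic encoding of the isosceles triangle ABC (AB = AC).
    # Nothing here is visual or two-dimensional.  (Illustrative only; the
    # "equality" facts are the bare minimum needed for this one example.)
    equal_segments = {frozenset("AB"): "given", frozenset("AC"): "given"}

    def segments_equal(s1, s2):
        """Two segments count as equal if they are the same segment, or both given equal."""
        a, b = frozenset(s1), frozenset(s2)
        return a == b or (a in equal_segments and b in equal_segments)

    def angles_equal(a1, a2):
        """Without a diagram, 'BAC' and 'CAB' are different symbols: their identity
        must be established by a rule (same vertex, same pair of rays)."""
        return a1[1] == a2[1] and set(a1) == set(a2)

    # The pseudo-Pappus step: compare triangle ABC with triangle ACB -- the same
    # letters in a different order -- and check side-angle-side congruence.
    t1, t2 = "ABC", "ACB"
    sas = (segments_equal(t1[0] + t1[1], t2[0] + t2[1]) and   # AB = AC (given)
           segments_equal(t1[0] + t1[2], t2[0] + t2[2]) and   # AC = AB (given)
           angles_equal(t1[1] + t1[0] + t1[2],                # angle BAC ...
                        t2[1] + t2[0] + t2[2]))               # ... = angle CAB
    print("Triangles ABC and ACB congruent (SAS):", sas)
    print("Hence angle ABC = angle ACB -- the base-angles are equal.")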
This example shows how careful one must be in ascribing creativity to a person, and in answering the second Lovelace question about a program. We have to consider not only the resulting idea, but also the mental processes which gave rise to it. Brute force search is even less creative than associative (improbabilist) thinking, and problem-dimensions which can be mapped by some systems may not be representable by others. (Analogously, a three-year-old is not showing flexible imagination in drawing a "funny" man: rather, she is showing incompetence in drawing an ordinary man.)

It should not be assumed from the example of Pappus (or Kekule) that visual imagery is always useful in mapping and transforming one's ideas. An example is given of a problem for which a visual representation is almost always constructed, but which hinders solution. Where mental maps are concerned, visual maps are not always best.

Chapter 6: Creative Connections

This chapter deals with associative creativity: the spontaneous generation of new ideas, and/or novel combinations of familiar ideas, by means of unconscious processes of association. Examples include not only "mere associations" but also analogies, which may then be consciously developed for purposes of rhetorical exposition or problem-solving. In Chapter 6, I discuss the initial association of ideas. (The evaluation and use of analogy are addressed in Chapter 7.)

One of the richest veins of associative creativity is poetic imagery. I consider some specific examples taken from Coleridge's poem The Ancient Mariner. For this poem (and also for his Kubla Khan), we have unusually detailed information about the literary sources of the imagery concerned. The literary scholar John Livingston Lowes (1951) studied Coleridge's Notebooks written while preparing for and writing the poem, and followed up every source mentioned there -- and every footnote given in each source. Despite the enormous quantity and range of Coleridge's reading, Lowes makes a subtle, and intuitively compelling, case in identifying specific sources for the many images in the poem.

However, an intuitively compelling case is one thing, and an explicit justification or detailed explanation is another. Lowes took for granted that association can happen (he used Coleridge's term: the hooks and eyes of memory), without being able to say just how these hooks and eyes can come together. I argue that connectionism, and specifically PDP (parallel distributed processing), can help us to understand how such unexpected associations are possible.

Among the relevant questions to which PDP-models offer preliminary answers are the following: How can ideas from very different sources (such as Captain Cook's diaries and Priestley's writings on optics) be spontaneously thought of together? How can two ideas be merged to produce a new structure, which shows the influence of both ancestor-ideas without being a mere "cut-and-paste" combination? How can the mind be "primed" (for instance, by the decision to write a poem about a seaman), so that one will more easily notice serendipitous ideas? Why may someone notice -- and remember -- something fairly uninteresting (such as a word in a literary text), if it occurs in an interesting context? How can a brief phrase conjure up from memory an entire line or stanza, from this or some other poem? And how can we accept two ideas as similar (the words "love" and "prove" as rhyming, for instance) in respect of a feature not identical in both?

The features of connectionist models which suggest answers to these questions are their powers of pattern-completion, graceful degradation, sensitization, multiple constraint-satisfaction, and "best-fit" equilibration. The computational processes underlying these features are described informally in Chapter 6 (I assume that it is not necessary to do so for BBS-readers).

The message of this chapter is that the unconscious, "insightful", associative aspects of creativity can be explained -- in outline, at least -- in computational terms. Connectionism offers some specific suggestions about what sorts of processes may underlie the hooks and eyes of memory.
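The first of those features, pattern-completion, can be illustrated in miniature. The sketch below is not one of the PDP models at issue; it is a bare-bones Hopfield-style associative memory of my own, included only to show the general idea: a few patterns are stored by strengthening the connections between units that are active together, and a fragmentary or corrupted cue then settles to the stored pattern that best fits it.

    import numpy as np

    # A tiny Hopfield-style associative memory (illustrative only).
    # Units are +1/-1; patterns are stored by Hebbian learning.
    patterns = np.array([
        [1, -1,  1, -1,  1, -1,  1, -1],   # "idea A"
        [1,  1,  1,  1, -1, -1, -1, -1],   # "idea B"
    ])
    n = patterns.shape[1]

    # Hebbian weight matrix: units that are active together get excitatory links.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def complete(cue, steps=10):
        """Pattern-completion: settle from a partial or corrupted cue to a stored pattern."""
        state = cue.copy()
        for _ in range(steps):
            for i in range(n):                      # asynchronous "best-fit" updates
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    # A version of "idea A" with two units flipped: the hooks catch the eyes.
    cue = np.array([1, -1,  1, -1, -1,  1,  1, -1])
    print(complete(cue))   # settles back to the stored pattern [1, -1, 1, -1, 1, -1, 1, -1]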
This is not to say, however, that all aspects of poetry -- or even all poetic imagery -- can be explained in this way. Quite apart from the hierarchical structure of natural language itself, some features of a poem may require thinking of a type more suited (at present) to symbolic models. For example, Coleridge's use of "The Sun came up upon the left" and "The Sun now rose upon the right" as the opening-lines of two closely-situated stanzas enabled him to indicate to the reader that the ship was circumnavigating the globe, without having to detail all the uneventful miles of the voyage. (Compare Kubrick's use of the spinning thigh-bone turning into a space-ship, as a highly compressed history of technology, in his film 2001: A Space Odyssey.) But these expressions, too, were drawn from his reading -- in this case, of the diaries of the very early mariners, who recorded their amazement at first experiencing the sunrise in the "wrong" part of the sky. Associative memory was thus involved in this poetic conceit, but it is not the entire explanation.

Chapter 7: Unromantic Artists

This chapter and the next describe and criticize some existing computer models of creativity. The separation into "artists" (Chapter 7) and "scientists" (Chapter 8) is to some extent an arbitrary rhetorical device. For example, analogy (discussed in Chapter 7) and induction and genetic algorithms (both outlined in Chapter 8) are all relevant to creativity in arts and sciences alike. In these two chapters, the second and third Lovelace questions -- about apparent computer-creativity -- are addressed at length. However, the first Lovelace question, relating to human creativity, is still the over-riding concern.

The computer models of creativity discussed in Chapter 7 include: a series of programs which produce line-drawings (McCorduck, 1991); a jazz-improviser (Johnson-Laird, 1991); a haiku-writer (Masterman & McKinnon Wood, 1968); two programs for writing stories (Klein et al., 1973; Meehan, 1981); and two analogy-programs (Chalmers, French, & Hofstadter, 1991; Holyoak & Thagard, 1989a, 1989b; Mitchell, 1993). In each case, the programmer has to try to define the dimensions of the relevant conceptual space, and to specify ways of exploring the space, so as to generate novel structures within it. Some evaluation, too, must be allowed for. In the systems described in this chapter, the evaluation is built into the generative procedures, rather than being done post hoc. (This is not entirely unrealistic: although humans can evaluate -- and modify -- their own ideas once they have produced them, they can also develop domain-expertise such that most of their ideas are acceptable without modification.)

Sometimes, the results are comparable with non-trivial human achievements. Thus some of the computer's line-drawings are spontaneously admired, by people who are amazed when told their provenance. The haiku-program can produce acceptable poems, sometimes indistinguishable from human-generated examples (however, this is due to the fact that the minimalist haiku-style demands considerable projective interpretation by the reader). And the jazz-program can play -- composing its own chord-sequences, as well as improvising on them -- at about the level of a moderately competent human beginner. (Another jazz-improviser, not mentioned in the book, plays at the level of a mediocre professional musician; unlike the former example, it starts out with significant musical structure provided to it "for free" by the human user (Hodgson, 1990).)

At other times, the results are clumsy and unconvincing, involving infelicities and absurdities of various kinds. This often happens when stories are computer-generated. Here, many rich conceptual spaces have to be negotiated simultaneously.
Quite apart from the challenge of natural language generation, the model must produce sensible plots, taking account both of the motivation and action of the characters and of their common-sense knowledge. Where very simple plot-spaces, and very limited world-knowledge, are concerned, a program may be able (sometimes) to generate plausible stories.

One, for example, produces Aesop-like tales, including a version of "The Fox and the Crow" (Meehan, 1981). A recent modification of this program (Turner, 1992), not covered in the book, is more subtle. It uses case-based reasoning and case-transforming heuristics to generate novel stories based on familiar ones; and because it distinguishes the author's goals from those of the characters, it can solve meta-problems about the story as well as problems posed within it. But even this model's story-telling powers are strictly limited, compared with ours.

Models dealing with the interpretation of stories, and of concepts (such as betrayal) used in stories, are also relevant here. Computational definitions of interpersonal themes and scripts (Abelson, 1973), programs that can answer questions about (simple) stories, and models which can -- up to a point -- interpret motivational and emotional structures within a story (Dyer, 1983) are all discussed.

So, too, is a program that generates English text describing games of noughts-and-crosses (Davey, 1978). The complex syntax of the sentences is nicely appropriate to the structure of the particular game being described. Human writers, too, often use subtleties of syntax to convey certain aspects of their story-lines.

The analogy programs described in Chapter 7 are ACME and ARCS (Holyoak & Thagard, 1989a, 1989b), and in the Preface to the paperback edition I add a discussion of Copycat (Chalmers et al., 1991; Mitchell, 1993), which I had originally intended to highlight in the main text.

ACME and ARCS are an analogy-interpreter and an analogy-finder, respectively. Calling on a semantic net of over 30,000 items, to which items can be added by the user, these programs use structural, semantic, and pragmatic criteria to evaluate analogies between concepts (whose structure is pre-given by the programmers). Other analogy programs (e.g. Falkenhainer, Forbus, & Gentner, 1989) use structural and semantic similarity as criteria. But ARCS/ACME takes account also of the pragmatic context, the purpose for which the analogy is being sought. So a conceptual feature may be highlighted in one context, and downplayed in another. The context may be one of rhetoric or poetic imagery, or one of scientific problem-solving (ARCS/ACME forms part of an inductive program that compares the "explanatory coherence" of rival scientific theories (Thagard, 1992)). Examples of both types are discussed.

The point of interest about Copycat is that it is a model of analogy in which the structure of the analogues is neither pre-assigned nor inflexible. The description of something can change as the system searches for an analogy to it, and its "perception" of an analogue may be permanently influenced by having seen it in a particular analogical relation to something else. Many analogies in the arts and sciences can be cited, to show that the same is true of the human mind.

Among the points of general interest raised in this chapter is the inability of these programs (Copycat excepted) to reflect on what they have done, or to change their way of doing it.
For instance, the line-drawing program that draws human acrobats in broadly realistic poses is unable to draw one-armed acrobats. It can generate acrobats with only one arm visible, if one arm is occluded by another acrobat in front. But that there might be a one-armed (or a six-armed) acrobat is strictly inconceivable. The reason is that the program's knowledge of human anatomy does not represent the fact that humans have two arms in a form which is separable from its drawing-procedures or modifiable by "imaginative" heuristics. It does not, for instance, contain anything of the form "Number of arms: 2", which might then be transformed by a "vary the variable" heuristic into "Number of arms: 1". Much as the four-year-old child cannot draw a "funny" one-armed man because she has not yet developed the necessary RR of her own man-drawing skill, so this program cannot vary what it does because -- in a clear sense -- it does not know what it is that it is doing.

This failing is not shared by all current programs: some featured in the next chapter can evaluate their own ideas, and transform their own procedures, to some extent. Moreover, this failure is "bad news" only to those seeking a positive answer to the second and third Lovelace questions. It is useful to anyone asking the first Lovelace question, for it underlines the importance of the factors introduced in Chapter 4: reflexive mapping of thought, evaluation of ideas, and transformation of conceptual spaces.

Chapter 8: Computer-Scientists

Like analogy, inductive thinking occurs across both arts and science. Chapter 8 begins with a discussion of the ID3 algorithm. This is used in many learning programs, including a world-beater -- better than the human expert who "taught" it -- at diagnosing soybean diseases (Michalski & Chilausky, 1980).

ID3 learns from examples. It looks for the logical regularities which underlie the classification of the input examples, and uses them to classify new, unexamined, examples. Sometimes, it finds regularities of which the human experts were unaware, such as unknown strategies for chess endgames (Michie & Johnston, 1984). In short, ID3 can not only define familiar concepts in H-creative ways, but can also define H-creative concepts.

However, all the domain-properties it considers have to be specifically mentioned in the input. (It does not have to be told just which input properties are relevant: in the chess end-game example, the chess-masters "instructing" the program did not know this.) That is, ID3-programs can restructure their conceptual space in P-creative -- and even H-creative -- ways. But they cannot change the dimensions of the space, so as to alter its fundamental nature.

Another program capable of H-discovery is meta-DENDRAL, an early expert system devoted to the spectroscopic analysis of a certain group of organic molecules. The original program, DENDRAL, uses exhaustive search to describe all possible molecules made up of a given set of atoms, and heuristics to suggest which of these might be chemically interesting. DENDRAL uses only the chemical rules supplied to it, but meta-DENDRAL can find new rules about how these compounds decompose. It does this by identifying unfamiliar patterns in the spectrographs of familiar compounds, and suggesting plausible explanations for them. For instance, if it discovers a smaller structure located near the point at which a molecule breaks, it may suggest that other molecules containing that sub-structure may break at these points too.

This program is H-creative, up to a point.
It not only explores its conceptual space (using evaluative heuristics and exhaustive search) but enlarges it too, by adding new rules. It generates hunches, which have led to the synthesis of novel, chemically interesting, compounds. And it has discovered some previously unsuspected rules for analysing several families of organic molecules. However, it relies on sophisticated theories built into it by expert chemists (which is why its novel hypotheses, though sometimes false, are always plausible). It casts no light on how those theories might have arisen in the first place.

Some computational models of induction were developed with an eye to the history of science (and to psychology), rather than for practical scientific puzzle-solving. Their aim was not to come up with H-creative ideas, but to P-create in the same way as human H-creators. Examples include BACON, GLAUBER, STAHL, and DALTON (Langley, Simon, Bradshaw, & Zytkow, 1987), whose P-creative activities are modelled on H-creative episodes recorded in the notebooks of human scientists.

BACON induces quantitative laws from empirical data. Its data are measurements of various properties at different times. It looks for simple mathematical functions defining invariant relations between numerical data-sets. For instance, it seeks direct or inverse proportionalities between measurements, or between their products or ratios. It can define higher-level theoretical terms, construct new units of measurement, and use mathematical symmetry to help find invariant patterns in the data. It can cope with noisy data, finding a best-fit function (within predefined limits). BACON has P-created many physical laws, including Archimedes' principle, Kepler's third law, Boyle's law, Ohm's law, and Black's law.

GLAUBER discovers qualitative laws, summarizing the data by classifying things according to (non-measurable) observable properties. Thus it discovers relations between acids, alkalis, and bases (all identified in qualitative terms). STAHL analyses chemical compounds into their elements. Relying on the data-categories presented to it, it has modelled aspects of the historical progression from phlogiston-theory to oxygen-theory. DALTON reasons about atoms and molecular structure. Using early atomic theory, it generates plausible molecular structures for a given set of components (it could be extended to cover other componential theories, such as particle physics or Mendelian genetics).

These four programs have rediscovered many scientific laws. However, their P-creativity is shallow. They are highly data-driven, their discoveries lying close to the evidence. They cannot identify relevance for themselves, but are "primed" with appropriate expectations. (BACON expects to find linear relationships, and rediscovered Archimedes' principle only after being told that things can be immersed in known volumes of liquid and the resulting volume measured.) They cannot model spontaneous associations or analogies, only deliberate reasoning. Some can suggest experiments, to test hypotheses they have P-created, but they have no sense of the practices involved. They can learn, constructing P-novel concepts used to make further P-discoveries. But their discoveries are exploratory rather than transformational: they cannot fundamentally alter their own conceptual spaces.
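To give a feel for how data-driven this sort of P-creation is, here is a toy sketch in the spirit of BACON -- my own illustration, not the Langley et al. program. Given paired measurements, it merely searches a small space of candidate forms (products of integer powers) for one that stays nearly invariant across the data; with planetary periods and distances supplied, and with relevance already decided by whoever chose the variables, it settles on T squared over R cubed being constant, a P-creation of Kepler's third law.

    import itertools

    def find_invariant(xs, ys, max_power=3, tolerance=0.01):
        """Toy BACON-style search: find integer powers a, b such that
        x**a * y**b is (nearly) constant across all the observations."""
        for a, b in itertools.product(range(1, max_power + 1),
                                      range(-max_power, max_power + 1)):
            values = [(x ** a) * (y ** b) for x, y in zip(xs, ys)]
            mean = sum(values) / len(values)
            if all(abs(v - mean) / mean < tolerance for v in values):
                return a, b, round(mean, 3)
        return None

    # Orbital period T (years) and mean distance R (astronomical units).
    T = [0.241, 0.615, 1.000, 1.881, 11.86, 29.46]
    R = [0.387, 0.723, 1.000, 1.524, 5.203, 9.539]

    # Finds a = 2, b = -3: T**2 / R**3 is invariant (Kepler's third law).
    print(find_invariant(T, R))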
Some AI-models of creativity can do this -- can alter their own conceptual spaces -- to some extent. For instance, the Automatic Mathematician (AM) explores and transforms mathematical ideas (Lenat, 1983). It does not prove theorems, or do sums, but generates "interesting" mathematical ideas (including expressions that might be provable theorems). It starts with 100 primitive concepts of set-theory (such as set, list, equality, and ordered pair), and 300 heuristics that can examine, combine, transform, and evaluate its concepts. One generates the inverse of a function (compare "consider the negative"). Others can compare, generalize, specialize, or find examples of concepts. Newly-constructed concepts are fed back into the pool.

In effect, AM has hunches: its evaluation heuristics suggest which new structures it should concentrate on. For example, AM finds it interesting whenever the union of two sets has a simply expressible property which is not possessed by either of them (a set-theoretic version of the notion that emergent properties are interesting). Its value-judgments are often wrong. Nevertheless, it has constructed some powerful mathematical notions, including prime numbers, Goldbach's conjecture, and an H-novel theorem concerning maximally-divisible numbers (which the programmer had never heard of). In short, AM appears to be significantly P-creative, and slightly H-creative too.

However, AM has been criticised (Haase, 1986; Lenat & Seely-Brown, 1984; Ritchie & Hanna, 1984; Rowe & Partridge, 1993). Critics have argued that some heuristics were included to make certain discoveries, such as prime numbers, possible; that the use of LISP provided AM with mathematical relevance "for free", since any syntactic change in a LISP expression is likely to result in a mathematically-meaningful string; that the program's exploration was too often guided by the human user; and that AM had fixed criteria of interest, being unable to adapt its values. The precise extent of AM's creativity, then, is unclear.

Because EURISKO has heuristics for changing heuristics, it can transform not only its stock of concepts but also its own processing-style. For example, one heuristic asks whether a rule has ever led to any interesting result. If it has not (but has been used several times), it will be less often used in future. If it has occasionally been helpful, though usually worthless, it may be specialized in one of several different ways. (Because it is sometimes useful and sometimes not, the specializing-heuristic can be applied to itself.) Other heuristics generalize rules, or create new rules by analogy with old ones. Using domain-specific heuristics to complement these general ones, EURISKO has generated H-novel ideas in genetic engineering and VLSI-design (one has been patented, so was not "obvious to a person skilled in the art").

Other self-transforming systems described in this chapter are problem-solving programs based on genetic algorithms (GAs). GA-systems have two main features. They all use rule-changing algorithms (mutation and crossover) modelled on biological genetics. Mutation makes a random change in a single rule. Crossover mixes two rules, so that (for instance) the lefthand portion of one is combined with the righthand portion of the other; the break-points may be chosen randomly, or may reflect the system's sense of which rule-parts are the most useful. Most GA-systems also include algorithms for identifying the relatively successful rules, and rule-parts, and for increasing the probability that they will be selected for "breeding" future generations. Together, these algorithms generate a new system, better adapted to the task.
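The GA machinery just described is compact enough to sketch directly. The toy below is my own illustration, not any of the systems cited in the book: it evolves fixed-length bit-string "rules" towards an arbitrary target, using occasional single-element mutation, single-point crossover with a randomly chosen break-point, and fitness-proportional selection of which rules get to "breed".

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]          # stand-in for "a useful rule"

    def fitness(rule):
        return sum(1 for r, t in zip(rule, TARGET) if r == t)

    def mutate(rule, rate=0.05):
        """Random change to single elements of a rule, with small probability."""
        return [1 - bit if random.random() < rate else bit for bit in rule]

    def crossover(a, b):
        """Combine the left-hand portion of one rule with the right-hand portion of another."""
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def select(population):
        """Fitness-proportional choice: better rules are more likely to 'breed'."""
        weights = [fitness(rule) + 1 for rule in population]
        return random.choices(population, weights=weights, k=1)[0]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
    for generation in range(60):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(len(population))]

    # Typically, the population is now dominated by near-copies of the target.
    best = max(population, key=fitness)
    print(best, "fitness:", fitness(best), "of", len(TARGET))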
An example cited in the book is an early GA-program which developed a set of rules to regulate the transmission of gas through a pipeline (Holland, Holyoak, Nisbett, & Thagard, 1986). Its data were hourly measurements of inflow, outflow, inlet-pressure, outlet-pressure, rate of pressure-change, season, time, date, and temperature. It altered the inlet-pressure to allow for variations in demand, and inferred the existence of accidental leaks in the pipeline (adjusting the inflow accordingly).

Although the pipeline-program discovered the rules for itself, the potentially relevant data-types were given in its original list of concepts. How far that compromises its creativity is a matter of judgment. No system can work from a tabula rasa. Likewise, the selectional criteria were defined by the programmer, and do not alter. Humans may be taught evaluative criteria, too. But they can sometimes learn -- and adapt -- them for themselves.

GAs, or randomizing thinking, are potentially relevant to art as well as to science -- especially if the evaluation is done interactively, not automatically. That is, at each generation the selection of items from which to breed for the next generation is done by a human being. This methodology is well-suited to art, where the evaluative criteria are not only controversial but also imprecise -- or even unknown. Two recent examples (not mentioned in the book, but described in Boden, in press) concern graphics (Sims, 1991; Todd & Latham, 1993). Sims' aim is to provide an interactive environment for graphic artists, enabling them to generate otherwise unimaginable images. Latham's is to produce his own art-works, but he too uses the computer to generate images he could not have developed unaided.

In a run of Sims' GA-system, the first image is generated at random. Then the program makes various independent random mutations in the image-generating rule, and displays the resulting images. The human now chooses one image to be mutated, or two to be "mated", and the process is repeated. The program can transform its image-generating code (simple LISP-functions) in many ways. It can alter parameters in pre-existing functions, combine or separate functions, or nest one function inside another (so many-levelled hierarchies can arise).

Many of Sims' computer-generated images are highly attractive, even beautiful. Moreover, they often cause a deep surprise. The change(s) between parent and offspring are sometimes amazing. The one appears to be a radical transformation of the other -- or even something entirely different. In short, we seem to have an example of impossibilist creativity.

Latham's interactive GA-program is much more predictable. Its mutation operators can change only the parameters within the image-generating code, not the body of the function. Consequently, it never comes up with radical novelties. All the offspring in a given generation are obviously siblings, and obviously related to their parents. So the results of Latham's system are less exciting than Sims'. But it is arguably even more relevant to artistic creativity.
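The difference between the two systems can be caricatured in a few lines (hypothetical code, not Sims' or Latham's): the genotype is a nested image-generating expression; a Sims-style mutation may rewrite the structure of that expression, whereas a Latham-style mutation only perturbs the numbers inside it, and in both cases it is the human who chooses which offspring to breed from.

    import random

    # Caricature of the two interactive systems (hypothetical code, not Sims' or Latham's).
    # A genotype is a nested expression built from assumed binary image-generating primitives.

    PRIMITIVES = ['add', 'mul', 'min', 'max']

    def random_expr(depth=3):
        if depth == 0 or random.random() < 0.3:
            return round(random.uniform(-1.0, 1.0), 2)       # a numeric leaf (a parameter)
        return [random.choice(PRIMITIVES), random_expr(depth - 1), random_expr(depth - 1)]

    def mutate_structure(expr):
        """Sims-style: a whole sub-expression may be replaced, so the form itself can change."""
        if random.random() < 0.2:
            return random_expr()
        if isinstance(expr, list):
            return [expr[0]] + [mutate_structure(sub) for sub in expr[1:]]
        return expr

    def mutate_parameters(expr, scale=0.1):
        """Latham-style: the structure is left alone; only the numeric parameters are nudged."""
        if isinstance(expr, list):
            return [expr[0]] + [mutate_parameters(sub, scale) for sub in expr[1:]]
        return expr + random.uniform(-scale, scale)

    def offspring(parent, mutate, n=9):
        """Produce n mutants; in the real systems these would be rendered and a person would choose."""
        return [mutate(parent) for _ in range(n)]

Structural mutation can turn the offspring into "something entirely different"; parameter-nudging guarantees a family resemblance. That is the contrast on which the next paragraph turns.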
The interesting comparison is not between the aesthetic appeal of a typical Latham-image and Sims-image, but between the discipline -- or lack of it -- which guides the exploration and transformation of the relevant visual space. Sims is not aiming for particular types of result, so his images can be fundamentally transformed in random ways at every generation. But Latham (a professional artist) has a sense of what forms he hopes to achieve, and specific aesthetic criteria for evaluating intermediate steps. Random changes at the margins are exploratory, and may provide some useful ideas. But fundamental transformations -- especially, random ones -- would be counterproductive. (If they were allowed, Latham would want to pick one and then explore its possibilities in a disciplined way.)

This fits the account of (impossibilist) creativity given in Chapters 3 and 4. Creativity works within constraints, which define the conceptual spaces with respect to which it is identified. Maps or RRs (or LISP-functions) which describe the parameters and/or the major dimensions of the space can be altered in specific ways, to generate new, but related, spaces.

Random changes are sometimes helpful, but only if they are integrated into the relevant style. Art, like science, involves discipline. Only after a space has been fairly thoroughly explored will the artist want to transform it in deeply surprising ways. A convincing computer-artist would therefore need not only randomizing operators, but also heuristics for constraining its transformations and selections in an aesthetically acceptable fashion. In addition, it would need to make its aesthetic selections (and perhaps guiding recommendations) for itself. And, to be true to human creativity, the evaluative rules should evolve also (Elton, 1993).

Chapter 9: Chance, Chaos, Randomness, Unpredictability

Unpredictability is often said to be the essence of creativity. And creativity is, by definition, surprising. But unpredictability is not enough. At the heart of creativity, as previous chapters have shown, lie constraints: the very opposite of unpredictability. Constraints and unpredictability, familiarity and surprise, are somehow combined in original thinking.

In this chapter, I distinguish various senses of "chance", "chaos", "randomness", and "unpredictability". I also argue that a scientific explanation need not imply either determinism or predictability, and that even deterministic systems may be unpredictable. Below, it will suffice to mention a number of different ways in which unpredictability can enter into creativity.

The first follows from the fact that creative constraints do not determine everything about the newly-generated idea. A style of thinking typically allows for many points at which two or more alternatives are possible. Several notes may be both melodious and harmonious; many words rhyme with moon; and perhaps there could be a ring-molecule with three, or five, atoms in the ring? At these points, some specific choice must be made. Likewise, many exploratory and transformational heuristics may be potentially available at a certain time, in dealing with a given conceptual space. But one or other must be chosen. Even if several heuristics can be applied at once (like parallel mutations in a GA-system), not all possibilities can be simultaneously explored. The choice has to be made, somehow.

Occasionally, the choice is random, or as near to random as one can get. So it may be made by throwing a die (as in playing Mozart's aleatory music); or by consulting a table of random numbers (as in the jazz-program); or even, possibly, as a result of some sudden quantum-jump inside the brain. There may even be psychological processes akin to GA-mechanisms, producing novel ideas in human minds.
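The point can be put schematically (illustrative code, not the jazz-program of Chapter 7): the constraints of the style fix which continuations are admissible at a choice-point, and the random number merely picks among them.

    import random

    # Illustrative only: the stylistic constraints determine the admissible options;
    # randomness enters solely in choosing among them (cf. the random-number table in the jazz-program).

    SCALE = ['C', 'D', 'E', 'F', 'G', 'A', 'B']   # an assumed constraint: notes of the prevailing key

    def admissible(previous_note):
        """A toy stylistic constraint: repeat the note or move by a single step, never leap."""
        i = SCALE.index(previous_note)
        return [SCALE[j] for j in (i - 1, i, i + 1) if 0 <= j < len(SCALE)]

    def next_note(previous_note):
        """The "random" choice operates only within what the constraints allow."""
        return random.choice(admissible(previous_note))

However the choice falls, the result still lies inside the space the constraints define; what cannot be predicted is which admissible option will be taken.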
More often, the choice is fully determined, by something which bears no systematic relation to the conceptual space concerned. (Some examples are given below.) Relative to that style of thinking, the choice is made randomly. Certainly, nothing within the style itself could enable us to predict its occurrence.

In either case, the choice must somehow be skilfully integrated into the relevant mental structure. Without such disciplined integration, it cannot lead to a positively valued, interesting, idea. With the help of this mental discipline, even flaws and accidents may be put to creative use. For instance, a jazz-drummer suffering from Tourette's syndrome is subject to sudden, uncontrollable, muscular tics, even when he is drumming. As a result, his drumsticks sometimes make unexpected sounds. But his musical skill is so great that he can work these supererogatory sounds into his music as he goes along. At worst, he "covers up" for them. At best, he makes them the seeds of unusual improvisations which he could not otherwise have thought of.

One might even call the drummer's tics serendipitous. Serendipity is the unexpected finding of something one was not specifically looking for. But the "something" has to be something which was wanted, or at least which can now be used. Fleming's discovery of the dirty petri-dish, infected by Penicillium spores, excited him because he already knew how useful a bactericidal agent would be. Proust's madeleine did not answer any currently pressing question, but it aroused a flood of memories which he was able to use as the trigger of a life-long project. Events such as these could not have been foreseen. Both trigger and triggering were unpredictable. Who was to say that the dish would be left uncovered, and infected by that particular organism? And who could say that Proust would eat a madeleine on that occasion? Even if one could do this (perhaps the laboratory was always untidy, and perhaps Proust was addicted to madeleines), one could not predict the effect the trigger would have on these individual minds.

This is so even if there are no absolutely random events going on in our brains. Chaos theory has taught us that fully deterministic systems can be, in practice, unpredictable. Our inescapable ignorance of the initial conditions means that we cannot forecast the weather, except in highly general (and short-term) ways. The inner dynamics of the mind are more complex than those of the weather, and the initial conditions -- each person's individual experiences, values, and beliefs -- are even more varied. Small wonder, then, if we cannot fully foresee the clouds of creativity in people's minds.

To some extent, however, we can. Different thinkers have differing individual styles, which set a characteristic stamp on all their work in a given domain. Thus Dr. Johnson complained, "Who but Donne would have compared a good man to a telescope?". Authorial signatures are largely due to the fact that people can employ habitual ways of making "random" choices. There may be nothing to say, beforehand, how someone will choose to play the relevant game. But after several years of practice, their "random" choices may be as predictable as anything in the basic genre concerned.

More mundane examples of creativity, which are P-creative but not H-creative, can sometimes be predicted -- and even deliberately brought about. Suppose your daughter is having difficulty mastering an unfamiliar principle in her physics homework. You might fetch a gadget that embodies the principle concerned, and leave it on the kitchen-table, hoping that she will play around with it and realise the connection for herself.
Even if you have to drop a few hints, the likelihood is that she will create the central idea. Again, Socratic dialogue helps people to explore their conceptual spaces in (to them) unexpected ways. But Socrates himself, like those taking his role today, knew what P-creative ideas to expect from his pupils.

We cannot predict creative ideas in detail, and we never shall be able to do so. Human experience is too richly idiosyncratic. But this does not mean that creativity is fundamentally mysterious, or beyond scientific understanding.

Chapter 10: Elite or Everyman?

Creativity is not a single capacity, nor is it a special one. It is an aspect of intelligence in general, which involves many different capacities: noticing, remembering, seeing, speaking, classifying, associating, comparing, evaluating, introspecting, and the like. Chapter 10 offers evidence for this view, drawing on the work of Perkins (1981) and also on computational work of various kinds.

For example, Kekule's description of "long rows, twining and twisting in snakelike motion", where "one of the snakes had seized hold of its own tail", assumes everyday powers of visual interpretation and analogy. These capacities are normally taken for granted in discussions of Kekule's H-creativity, but they require some psychological explanation. Relevant computational work on low-level vision suggests that Kekule's imagery was grounded in certain specific, and universal, visual capacities -- including the ability to identify lines and end-points. (His hunch, by contrast, required special expertise. As remarked in Chapter 4, only a chemist could have realized the potential significance of the change in neighbour-relations caused by the coalescence of end-points, or the "snake" which "seized hold of its tail".)

Similarly, Mozart's renowned musical memory, and his reported capacity for hearing a whole symphony "all at once", can be related to computational accounts of powers of memory and comprehension common to us all. Certainly, his musical expertise was superior in many ways. He had a better grasp of the conceptual spaces concerned, and a better understanding -- better even than Salieri's -- of how to explore them so as to locate their farthest nooks and crannies. (Unlike Haydn, for example, he was not a composer who made adventurous transformations.) But much of Mozart's genius may have lain in the better use, and the vastly more extended practice, of facilities we all share.

Much -- but perhaps not all. Possibly, there was something special about Mozart's brain which predisposed him to musical genius (Gardner, 1983). However, we have little notion, at present, of what this could be. It may have been some cerebral detail which had the emergent effect of giving him greater musical powers. For example, the jazz-improvisation program described in Chapter 7 employed only very simple rules to improvise, because its short-term memory was deliberately constrained to match the limited STM of people. Human jazz-musicians cannot improvise hierarchically nested chord-sequences "on the fly", but have to compose (or memorize) them beforehand. A change in the range of STM might enable someone to improvise and appreciate musical structures of a complexity not otherwise intelligible. But this musically significant change might be due to an apparently "boring" feature of the brain.

Many other examples of creativity (drawn, for instance, from poetry, painting, music, and choreography) are cited in this chapter.
They all rely on familiar capacities for their effect, and arguably for their occurrence too. We appreciate them intuitively, and normally take their accessibility -- and their origins -- for granted. But psychological explanations in computational terms may be available, at least in outline.

The role of motivation and emotion is briefly mentioned, but is not a prime theme. This is not because motivation and emotion are in principle outside the reach of a computational psychology. Some attempts have been made to bring these matters within a computational account of the mind (e.g. Boden, 1972; Sloman, 1987). But such attempts provide outline sketches rather than functioning models. Still less is it because motivation is irrelevant to creativity. But the main topic of the book is how (not why) novel ideas arise in human minds.

Chapter 11: Of Humans and Hoverflies

The final chapter focusses on two questions. One is the fourth Lovelace question: could a computer really be creative? The other is whether any scientific explanation of creativity, whether computational or not, would be dehumanizing in the sense of destroying our wonder at it -- and at the human mind in general.

With respect to the fourth Lovelace question, the answer "No" may be defended in at least four different ways. I call these the brain-stuff argument, the empty-program argument, the consciousness argument, and the non-human argument. Each of these applies to intelligence (and intentionality) in general, not just to creativity in particular.

The brain-stuff argument (Searle, 1980) claims that whereas neuroprotein is a kind of stuff which can support intelligence, metal and silicon are not. This empirical claim is conceivably correct, but we have no specific reason to believe it. Moreover, the associated claim -- that it is intuitively obvious that neuroprotein can support intentionality and that metal and silicon cannot -- must be rejected.

Intuitively speaking, that neuroprotein supports intelligence is utterly mysterious: how could that grey mushy stuff inside our skulls have anything to do with intentionality? Insofar as we understand this, we do so because of various functions that nervous tissue makes possible (as the sodium pump enables action potentials, or "messages", to pass along an axon). Any material substrate capable of supporting all the relevant functions could act as the embodiment of mind. Whether neurochemistry describes the only such substrate is an empirical question, not to be settled by intuitions.

The empty-program argument is Searle's (1980) claim that a computational psychology cannot explain understanding, because programs are all syntax and no semantics: their symbols are utterly meaningless to the computer itself. I reply that a computer program, when running in a computer, has proto-semantic (causal) properties, in virtue of which the computer does things -- some of which are among the sorts of thing which enable understanding in humans and animals (Boden, 1988, ch. 8; Sloman, 1986). (This is not to say that any computer-artefact could possess understanding in the full sense, or what I have termed "intrinsic interests", grounded in evolutionary history (Boden, 1972).)

The consciousness argument is that no computer could be conscious, and therefore -- since consciousness is needed for the evaluation phase, and even for much of the preparation phase -- no computer can be creative. I reply that it's not obvious that evaluation must be carried out consciously.
A creative computer might recognize (evaluate) its creative ideas by using relevant reflexive criteria without also having consciousness. Moreover, some aspects of consciousness can be illuminated by a computational account, although admittedly "qualia" present an unsolved problem. The question must remain open -- not just because we do not know the answer, but because we do not clearly understand how to ask the question.

According to the non-human argument, to regard computers as truly intelligent is not a mere factual mistake, but a moral absurdity: only members of the human, or animal, community should be granted moral and epistemological consideration (of their interests and opinions). If we ever agreed to remove all the scare-quotes around the psychological words we use in describing computers, so inviting them to join our human community, we would be committed to respecting their goals and judgments. This would not be a purely factual matter, but one of moral and political choice -- about which it is impossible to legislate now.

In short, each of the four negative replies to the last Lovelace question is challengeable. But even someone who does accept a negative answer here can consistently accept positive answers to the first three Lovelace questions. The main argument of the book remains unaffected.

The second theme of this final chapter is the question whether, where creativity is in question, scientific explanation in general should be spurned. Many people, from Blake to Roszak, have seen the natural sciences as dehumanizing in various ways. Three are relevant here: the ignoring of mentalistic concepts, the denial of cherished beliefs, and the destructive demystification of some valued phenomena.

The natural sciences have had nothing to say about psychological phenomena as such; and scientifically-minded psychologists have often conceptualized them in reductionist (e.g. behaviourist, or physiological) terms. To ignore something is not necessarily to deny it. But, given the high status of the natural sciences, the fact that they have not dealt with the mind has insidiously downplayed its importance, if not its very existence.

This charge cannot be levelled at computational psychology, however. Intentional concepts, such as representation, lie at the heart of it, and of AI. Some philosophers claim that these sciences have no right to use such terms. Even so, they cannot be accused of deliberately ignoring intentional phenomena, or of rejecting intentionalist vocabulary.

The second charge of dehumanization concerns what science explicitly denies. Some scientific theories have rejected comforting beliefs, such as geocentrism, special creation, or rational self-control. But a scientific psychology need not -- and a computational psychology does not -- deny creativity, as astronomy denies geocentrism. On the contrary, the preceding chapters have acknowledged creativity again and again. Even to say that it rests on universal features of human minds is not to deny that some ideas are surprising, and special, requiring explanation of how they could possibly arise.

However, the humanist's worry concerns not only denial by rejection, but also denial by explanation. The crux of the third type of anti-scientific resistance is the feeling that scientific explanation of any kind must drive out wonder: that to explain something is to cease to marvel at it. Not only do we wonder at creativity, but positive evaluation is essential to the concept.
So it may seem that to explain creativity is insidiously to downgrade it -- in effect, to deny it.

Certainly, many examples can be given where understanding drives out wonder. For instance, we may marvel at the power of the hoverfly to fly to its mate hovering nearby (so as to mate in mid-air). Many people might be tempted to describe the hoverfly's activities in terms of its goals and beliefs, and perhaps even its determination in going straight to its mate without any coyness or prevarication. How wonderful is the mind of the humble hoverfly!

In fact, the hoverfly's flight-path is determined by a simple and inflexible rule, hardwired into its brain. This rule transforms a specific visual signal into a specific muscular response. The fly's initial change of direction depends on the particular approach-angle subtended by the target-fly. The creature, in effect, always assumes that the size and velocity of the seen target (which may or may not be a fly) are those corresponding to hoverflies. When initiating a new flight-path, the fly's angle of turn is selected on this rigid, and fallible, basis. Moreover, the fly's path cannot be adjusted in midflight, there being no way in which it can be influenced by feedback from the movement of the target animal.

This evidence must dampen the enthusiasm of anyone who had marvelled at the psychological subtlety of the hoverfly's behaviour. The insect's intelligence has been demystified with a vengeance, and it no longer seems worthy of much respect. One may see beauty in the evolutionary principles that enabled this simple computational mechanism to develop, or in the biochemistry that makes it function. But the fly itself cannot properly be described in anthropomorphic terms. Even if we wonder at evolution, and at insect-neurophysiology, we can no longer wonder at the subtle mind of the hoverfly.

Many people fear that this disillusioned denial of intelligence in the hoverfly is a foretaste of what science will say about our minds too. A few "worrying" examples can indeed be given: for instance, think of how perceived sexual attractiveness turns out to relate to pupil-size. In general, however, this fear is mistaken. The mind of the hoverfly is much less marvellous than we had imagined, so our previous respect for the insect's intellectual prowess is shown up as mere ignorant sentimentality. But computational explanations of thinking can increase our respect for human minds, by showing them to be much more complex and subtle than we had previously recognized.

Consider, for instance, the many different ways (some are sketched in Chapters 4 and 5) in which Kekule could have seen snakes as suggesting ring-molecules. Think of the rich analogy-mapping in Coleridge's mind, which drew on naval memoirs, travellers' tales, and scientific reports to generate the imagery of The Ancient Mariner (Chapter 6). Bear in mind the mental complexities (outlined in Chapter 7) of generating an elegant story-line, or improvising a jazz-melody. And remember the many ways in which random events (the mutations described in Chapter 8, or the serendipities cited in Chapter 9) may be integrated into pre-existing conceptual spaces with creative effect.

Writing about Coleridge's imagery, Livingston Lowes said: "I am not forgetting beauty. It is because the worth of beauty is transcendent that the subtle ways of the power that achieves it are transcendently worth searching out." His words apply not only to literary studies of creativity, but to scientific enquiry too.
A scientific psychology, whether computational or not, allows us plenty of room to wonder at Mozart, or at our friends' jokes. Psychology leaves poetry in place. Indeed, it adds a new dimension to our awe on encountering creative ideas, for it helps us to see the richness, and yet the discipline, of the underlying mental processes.

To understand, even to demystify, is not necessarily to denigrate. A scientific explanation of creativity shows how extraordinary is the ordinary person's mind. We are, after all, humans -- not hoverflies.

-==REFERENCES==-

Abelson, R. P. (1973) The structure of belief systems. In: Computer models of thought and language, eds. R. C. Schank & K. M. Colby (pp. 287-340).

Boden, M. A. (1972) Purposive explanation in psychology. Cambridge, Mass.: Harvard University Press.

Boden, M. A. (1988) Computer models of mind: Computational approaches in theoretical psychology. Cambridge: Cambridge University Press.

Boden, M. A. (1990) The creative mind: Myths and mechanisms. London: Weidenfeld & Nicolson. (Expanded edn., London: Abacus, 1991.)

Boden, M. A. (in press) What is creativity? In: Dimensions of creativity, ed. M. A. Boden. Cambridge, Mass.: MIT Press.

Brannigan, A. (1981) The social basis of scientific discoveries. Cambridge: Cambridge University Press.

Chalmers, D. J., French, R. M., & Hofstadter, D. R. (1991) High-level perception, representation, and analogy: A critique of artificial intelligence methodology. CRCC Technical Report 49. Center for Research on Concepts and Cognition, Indiana University, Bloomington, Indiana.

Clark, A., & Karmiloff-Smith, A. (in press) The cognizer's innards. Mind and Language.

Davey, A. (1978) Discourse production: A computer model of some aspects of a speaker. Edinburgh: Edinburgh University Press.

Dyer, M. G. (1983) In-depth understanding: A computer model of integrated processing for narrative comprehension. Cambridge, Mass.: MIT Press.

Elton, M. (1993) Towards artificial creativity. In: Proceedings of LUTCHI symposium on creativity and cognition, ed. E. Edmonds (un-numbered). Loughborough: University of Loughborough.

Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989) The structure-mapping engine: Algorithm and examples. AI Journal, 41, 1-63.

Gardner, H. (1983) Frames of mind: The theory of multiple intelligences. London: Heinemann.

Gelernter, H. L. (1963) Realization of a geometry-theorem proving machine. In: Computers and thought, eds. E. A. Feigenbaum & J. Feldman (pp. 134-152). New York: McGraw-Hill.

Haase, K. W. (1986) Discovery systems. Proc. European Conf. on AI, 1, 546-555.

Hadamard, J. (1954) An essay on the psychology of invention in the mathematical field. New York: Dover.

Hodgson, P. (1990) Understanding computing, cognition, and creativity. MSc thesis, University of the West of England.

Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986) Induction: Processes of inference, learning, and discovery. Cambridge, Mass.: MIT Press.

Holyoak, K. J., & Thagard, P. R. (1989a) Analogical mapping by constraint satisfaction. Cognitive Science, 13, 295-356.

Holyoak, K. J., & Thagard, P. R. (1989b) A computational model of analogical problem solving. In: Similarity and analogical reasoning, eds. S. Vosniadou & A. Ortony (pp. 242-266). Cambridge: Cambridge University Press.

Johnson-Laird, P. N. (1991) Jazz improvisation: A theory at the computational level. In: Representing musical structure, eds. P. Howell, R. West, & I. Cross (pp. 291-326). London: Academic Press.
Karmiloff-Smith, A. (1993) Beyond modularity: A developmental perspective on cognitive science. Cambridge, Mass.: MIT Press.

Klein, S., Aeschlimann, J. F., Balsiger, D. F., Converse, S. L., Court, C., Foster, M., Lao, R., Oakley, J. D., & Smith, J. (1973) Automatic novel writing: A status report. Technical Report 186. Madison, Wis.: University of Wisconsin Computer Science Dept.

Koestler, A. (1975) The act of creation. London: Picador.

Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987) Scientific discovery: Computational explorations of the creative process. Cambridge, Mass.: MIT Press.

Lenat, D. B. (1983) The role of heuristics in learning by discovery: Three case studies. In: Machine learning: An artificial intelligence approach, eds. R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (pp. 243-306). Palo Alto, Calif.: Tioga.

Lenat, D. B., & Seely-Brown, J. (1984) Why AM and EURISKO appear to work. AI Journal, 23, 269-94.

Livingston Lowes, J. (1951) The road to Xanadu: A study in the ways of the imagination. London: Constable.

Longuet-Higgins, H. C. (1987) Mental processes: Studies in cognitive science. Cambridge, Mass.: MIT Press.

Longuet-Higgins, H. C. (in preparation) Musical aesthetics. In: Artificial intelligence and the mind: New breakthroughs or dead ends?, eds. M. A. Boden & A. Bundy. London: Royal Society & British Academy (to appear).

McCorduck, P. (1991) Aaron's code. San Francisco: W. H. Freeman.

Masterman, M., & McKinnon Wood, R. (1968) Computerized Japanese haiku. In: Cybernetic Serendipity, ed. J. Reichardt (pp. 54-5). London: Studio International.

Meehan, J. (1981) TALE-SPIN. In: Inside computer understanding: Five programs plus miniatures, eds. R. C. Schank & C. J. Riesbeck (pp. 197-226). Hillsdale, NJ: Erlbaum.

Michie, D., & Johnston, R. (1984) The creative computer: Machine intelligence and human knowledge. London: Viking.

Michalski, R. S., & Chilausky, R. L. (1980) Learning by being told and learning from examples: An experimental comparison of two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. International Journal of Policy Analysis and Information Systems, 4, 125-61.

Mitchell, M. (1993) Analogy-making as perception. Cambridge, Mass.: MIT Press.

Perkins, D. N. (1981) The mind's best work. Cambridge, Mass.: Harvard University Press.

Poincare, H. (1982) The foundations of science: Science and hypothesis, The value of science