

Artificial Intelligence 60 (1993) 1-22
Elsevier


ARTINT 933

Turing's Test and conscious thought

Donald Michie
The Turing Institute, George House, 36 North Hanover Street, Glasgow G1 2AD, Scotland, UK

Received July 1990
Revised May 1992

Abstract

Michie, D., Turing's Test and conscious thought, Artificial Intelligence 60 (1993) 1-22.

Over forty years ago A.M. Turing proposed a test for intelligence in machines. Based as it is solely on an examinee's verbal responses, the Test misses some important components of human thinking. To bring these manifestations within its scope, the Turing Test would require substantial extension. Advances in the application of AI methods in the design of improved human-computer interfaces are now focussing attention on machine models of thought and knowledge from the altered standpoint of practical utility.

Introduction

Although its text is now available in a collection of papers published by MIT Press (edited by Carpenter and Doran [4]), there is little awareness of a remarkable lecture delivered to the London Mathematical Society on February 20, 1947. The lecturer was Alan Turing. His topic was the nature of programmable digital computers, taking as his exemplar the "Automatic Computing Engine" (ACE) then under construction at the National Physical Laboratory. At that time no stored-program machine was yet operational anywhere in the world. So each one of his deeply considered points blazes a trail—logical equivalence to the Universal Turing Machine of hardware constructions such as the ACE, the uses of variable-length precision, the need for large paged memories, the nature of "while" loops, the idea of the subroutine, the possibility of remote access, the automation of I/O, the concept of the

Correspondence to: D. Michie, The Turing Institute, George House, 36 North Hanover Street, Glasgow G1 2AD, Scotland, UK.

0004-3702/93/$06.00 © 1993 Elsevier Science Publishers B.V. All rights reserved


operating system. Turing then considers the eventual possibility of automating the craft of programming, itself scarcely yet invented. He further discusses the forms of predictable resistance to such automation among those whom today we call DP (data processing) staff:

They may be unwilling to let their jobs be stolen from them in this way. In that case they will surround the whole of their work with mystery and make excuses, couched in well chosen gibberish, whenever any dangerous suggestions are made.

We then read

This topic [of automatic programming] naturally leads to the question as to how far it is possible in principle for a computing machine to simulate human activities . . .

and the lecturer launches into the theme which we know today as artificial intelligence (AI).

Turing put forward three positions:

Position 1. Programming could be done in symbolic logic and would then require the construction of appropriate interpreters.

Position 2. Machine learning is needed so that computers can discover new knowledge inductively from experience as well as deductively.

Position 3. Humanised interfaces are required to enable machines to adapt to people, so as to acquire knowledge tutorially.

I reproduce relevant excerpts below, picking out particular phrases in bold type.

1. Turing on logic programming

I expect that digital computing machines will eventually stimulate a considerable interest in symbolic logic and mathematical philosophy; ... in principle one should be able to communicate in any symbolic logic, provided that the machine were given instruction tables which would enable it to interpret that logical system.

2. Turing on machine learning

Let us suppose we have set up a machine with certain initial instruction tables, so constructed that these tables might on occasion, if good reason arose, modify those tables. One can imagine that after the machine had been operating for some time the instructions would have been altered out of all recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations. Possibly it might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner. In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work.

3. Turing on cognitive compatibility

No man adds very much to the body of knowledge; why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.

AI's inventory of fundamental ideas due to Turing would not be complete without the proposal which he put forward three years later in the philosophical journal Mind, known today as the "Turing Test". The key move was to define intelligence operationally, i.e., in terms of the computer's ability, tested over a typewriter link, to sustain a simulation of an intelligent human when subjected to questioning. Published accounts usually overstate the scope proposed by Turing for his "imitation game", presenting the aim of the machine's side as successful deceit of the interrogator throughout a lengthy dialogue. But Turing's original imitation game asked only for a rather weak level of success over a relatively short period of interrogation.

I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification [as between human and computer] after five minutes of questioning.

Presumably the interrogator has no more than 2½ minutes of question-putting to bestow on each of the remote candidates, whose replies are not time-limited. We have to remind ourselves that—in spite of subsequent misstatements, repeated and amplified in a recent contribution to Mind by Robert French [9]—the question which Turing wished to place beyond reasonable dispute was not whether a machine might think at the level of an intelligent human. His proposal was for a test of whether a machine could be said to think at all.
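Stated this way, the criterion is directly operational. The following is a minimal sketch of the pass condition in code; the trial counts are invented purely for illustration:

```python
# A toy rendering of the pass criterion quoted above: the machine
# "passes" if average interrogators make the right identification no
# more than 70 per cent of the time after five minutes of questioning.

def passes_turing_criterion(correct: int, trials: int,
                            threshold: float = 0.70) -> bool:
    """True if interrogators did no better than the stated threshold."""
    return correct / trials <= threshold

# Invented example: 31 correct identifications in 50 interrogations.
print(passes_turing_criterion(31, 50))  # 0.62 <= 0.70 -> True
```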


Solipsism and the charmed circle

Notwithstanding the above, French's paper has important things to say concerning the role played by "subcognitive" processes in intelligent thought. He points out that "... any sufficiently broad set of questions making up a Turing Test would necessarily contain questions that rely on subcognitive associations for their answers."

The scientific study of cognition has shown that some thought processes are intimately bound up with consciousness while others take place subliminally. Further, as French reminds us, the two are interdependent. Yet other contributors to the machine intelligence discussion often imply a necessary association between consciousness and all forms of intelligence, as a basis for claiming that a computer program could not exhibit intelligence of any kind or degree. Thus John R. Searle [24] recently renewed his celebrated "Chinese Room" argument against the possibility of designing a program that, when run on a suitable computer, would show evidence of "thinking". After his opening question "Can a machine think?", Searle adds: "Can a machine have conscious thoughts in exactly the same sense that you and I have?" Since a computer program does nothing but shuffle symbols, so the implication goes, one cannot really credit it with the kinds of sensations, feelings, and impulses which accompany one's own thinking—e.g. the excitement of following an evidential clue, the satisfaction of following a lecturer's argument or the "Aha!" of subjective comprehension. Hence however brilliant and profound its responses in the purely intellectual sense recognised by logicians, a programmed computing system can never be truly intelligent. Intelligence would imply that suitably programmed computers can be conscious, whereas we "know" that they cannot be.

In his 1950 Mind paper Turing considered arguments of this kind, citing Jefferson's Lister Oration for 1949:

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

Jefferson's portrayal of conscious thought and feeling here compounds two aspects which are today commonly distinguished, namely on the one hand self-awareness, and, on the other, empathic awareness of others, sometimes termed "inter-subjectivity". Turing's comment on Jefferson disregards this second aspect, and addresses only the first. The relevant excerpt from the Mind paper is:


4. Turing on the argument from consciousness

. . . according to the most extreme form of this view [that thought is impossible without consciousness] the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe "A thinks but B does not" whilst B believes "B thinks but A does not". Instead of arguing continually over this question it is usual to have the polite convention that everyone thinks.

After a fragment of hypothetical dialogue illustrating the imitation game, Turing continues:

... I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position. They will then probably be willing to accept our test.

He thus contributes a fourth position to the previous list.

Position 4. To the extent that possession of consciousness is not refutable in other people, we conventionally assume it. We should be equally ready to abandon solipsism for assessing thinking in machines.

Turing did not suggest that consciousness is irrelevant to thought, nor that its mysterious nature and the confusions of educated opinion on the subject should be ignored. His point was simply that these mysteries and confusions do not have to be resolved before we can address questions of intelligence. Then and since, it has been the AI view that purely solipsistic definitions based on subjectively observed states are not useful. But Turing certainly underestimated the potential appeal of a more subtle form of solipsism generalised to groups of agents so as to avoid the dilemma which he posed. Following Daniel C. Dennett [5], I term this variant the "charmed circle" argument. A relationship with the notion of inter-subjectivity, or socially shared consciousness (see Colwyn Trevarthen [25, 26]) can be brought out by revising the above-quoted passage from Turing along some such lines as "the only way by which one could be sure that a machine thinks is to be a member of a charmed circle which has accepted that machine into its ranks and can collectively feel itself thinking." It is indeed according to just such social pragmatics that this issue is likely to be routinely adjudicated in the coming era of human-computer collaborative groups (see later).


But note that in the first of the two quoted passages concerning the argument from consciousness, the concluding sentence becomes truer to human society if rephrased:

. . . it is usual to have the polite convention that everyone who is regarded as a person thinks.

Turing's uncorrected wording, quite unintentionally I believe, prejudges whether the polite convention would have led Aristotle to concede powers of intelligent thought to women, or Australian settlers to Tasmanian aborigines, or nineteenth-century plantation owners to their slaves. Nor, conversely, would Turing necessarily have wished to withhold the polite convention if someone asserted: "My dog is a real person, and intelligent too." In his above-cited contribution on consciousness to the Oxford Companion to the Mind, Daniel Dennett [5] puts the point well:

How do creatures differ from robots, real or imagined? They are organically and biologically similar to us, and we are the paradigmatic conscious creatures. This similarity admits of degrees, of course, and one's intuitions about which sorts of similarity count are probably untrustworthy. Dolphins' fishiness subtracts from our conviction, but no doubt should not. Were chimpanzees as dull as sea slugs, their facial similarity to us would no doubt nevertheless favour their inclusion in the charmed circle. If house-flies were about our size, or warm-blooded, we'd be much more confident that when we plucked off their wings they felt pain (our sort of pain, the kind that matters).

At the outset of his paper Turing does briefly consider what may be termed the argument from dissimilarity. He dismisses the idea of "trying to make a 'thinking machine' more human by dressing it up in artificial flesh", and commends the proposed typewriter link as side-stepping a line of criticism which he sees as pointless. He evidently did not foresee the use of similarity to define charmed circles of sufficient radius to deflect the accusation of narrow solipsism. In view of the absence from his discussion of all reference to shared aspects of conscious thought, he would probably have seen it as an evasion.

The "charmed circle" criterion disallows intelligence in any non-livingvehicle—so long as one is careful to draw the circle appropriately. But the ideaof inanimate intelligence is in any case a difficult one to expound, for humanintelligence is after all a product of the evolution of animals. This partlyexplains Turing's simplifying adoption of an engineering, rather than ascientific, scenario for his purpose. But the notion of performance trialsconducted by a doubting client also sorted naturally with Turing's temperamen-tal addiction to engineering images. Although himself conspicuously weak inpurely mechanical skills, he startled his associates with aflow of highly original


practical innovations, home-built and usually doomed. The same addiction was also discernible at the abstract level, where his gifts were not hampered by problems of physical implementation—notably in that most unlikely of purely mathematical constructions, the Universal Turing Machine itself.

Consciousness in artifacts

Searle himself is not insensitive to the appearance of special pleading. In his earlier quoted Scientific American article [24] he writes:

. . . I have not tried to show that only biologically based systems like our brains can think. Right now those are the only systems we know for a fact can think, but we might find other systems in the universe that can produce conscious thoughts, and we might even come to be able to create thinking systems artificially. I regard this issue as up for grabs.

With these words he seems to accept the possibility of conscious thought in machines. But we must remember that John Searle's objection relates only to programmed machines or other symbol shufflers. According to the Searle canon, (i) "thinking" implies that the system is conscious, and (ii) consciousness is not possible in a programmed device which can only shuffle symbols. What if two "other systems in the universe" are found with identical repertoires of intellectual behaviours, including behaviour suggestive of conscious awareness? If one of the systems is implemented entirely in circuitry and the other in software, then Searle gives the benefit of the doubt to the first as a thinking system, but withholds it from the second as not being "really" conscious. What is to be done if some laboratory of the future builds a third system by faithfully re-implementing the entire circuitry of the first as software, perhaps microcoded for speed? Not only the functionality of the first machine would be reproduced, but also its complete and detailed logic and behavioural repertoire. Yet a strict reading of Searle forces the conclusion that, although the scientifically testable attributes of conscious thought would be reconstructed in the new artifact, an intangible "something" would be lost, namely "true" consciousness. The latter could in principle be synthesized in circuitry (see Searle: "we might even come to be able to create thinking systems artificially"), but not in software or other forms of symbol shuffling.

There is in this a counterpoint to earlier reflections by the Nobel Prize-winning neuroscientist Sir John Eccles [8], who held, like Searle today, that a being's claim to be conscious may legitimately be ignored on grounds of knowing that being's interior mechanisms:

We can, in principle, explain all . . . input-output performance in terms of activity of neuronal circuits; and consequently, consciousness seems to be absolutely unnecessary! ... as neurophysiologists we simply have no use for consciousness in our attempt to explain how the nervous system works.

But for Eccles it was the "activity of neuronal circuits" rather than Searle's shuffling of symbols that permitted appearances of consciousness to be discounted. Eccles has subsequently changed his view (see Popper and Eccles [21]), which we now see as having defined the limits of neurophysiology prematurely. In the same way Searle's in-principle exclusion of possible future AI artifacts risks prematurely defining the limits of knowledge-based programming. Like Eccles, Searle produces no workable definition of consciousness.

To rescue such positions from metaphysics, we must hope that a suitable Searle Test will be forthcoming to complement Turing's.

Subarticulate thought

In earlier generations students of intelligent behaviour generally ignored the phenomenon of consciousness. Some from the behaviourist camp denied that there was anything for the term to name. Turing's impact on psychology has been to move the study of cognition towards increasingly computation-oriented models. One of the ways in which these formulations differ from those of behaviourism is in allotting an important role to those aspects of consciousness which are susceptible of investigation, including investigation via verbal report. But as new findings have multiplied, an awareness has grown of the complementary importance of other forms of thought and intelligence. Dennett [5] remarks as follows:

. . . the cognitive psychologist marshals experimental evidence, models, and theories to show that people are engaged in surprisingly sophisticated reasoning processes of which they can give no introspective account at all. Not only are minds accessible to outsiders; some mental activities are more accessible to outsiders than to the very "owners" of those minds!

Although having no access to these mental activities in the sense of direct awareness of their operations, the explicitly conscious forms of mental calculation enjoy intimate and instant access to their fruits. In many cases, as illustrated below with the problem of conjecturing the pronunciation of imaginary English words, access to the fruits of subliminal cognition is a necessity, even though conscious awareness of the cognitive operations themselves is not.

The Turing Test's typewriter link with the owners of two candidate minds gives no direct access to the class of "silent" mental activity described by Dennett. In the form put forward by Turing, the Test can directly detect only those processes which are susceptible of introspective verbal report. Does this then render it obsolete as a test of intelligence in machines?

The regretful answer is "Yes". The Test's didactic clarity is today suffering erosion from developments in cognitive science and in knowledge engineering. These have already revealed two dimensions of mismatch, even before questions of inter-subjectivity and socially expressed intelligence are brought into the discussion:

(i) inability of the Test to bring into the game thought processes of kinds which humans can perform but cannot articulate, and

(ii) ability of the Test to detect and examine a particular species of thought process ("cognitive skills") in a suitably programmed machine through its self-articulations; similar access to human enactments of such processes is not possible because they are typically subarticulate.

Concerning (i), the idea that mental processes below the level of conscious and articulate thought are somehow secondary was dispatched with admirable terseness by A.N. Whitehead in 1911:

It is a profoundly erroneous truism . . . that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilisation advances by extending the number of important operations which we can perform without thinking about them. [28]

Moreover, although subcognitive skills can operate independently of conscious thought, the converse is by no means obvious. Indeed the processes of thought and language are dependent on such skills at every instant. The laboriously thought-out, and articulate, responses of the beginner are successively replaced by those which are habitual and therefore effortless. Only when a skilled response is blocked by some obstacle is it necessary to "go back to first principles" and reason things out step by step.

It is understandable that Turing should have sought to separate intelligence from the complications and ambiguities of the conscious/unconscious dichotomy, and to define this mental quality in terms of communication media which are characteristic of the academic rather than, say, the medical practitioner, the craftsman, or the explorer. But his Test has paid a price for the simplification. At the very moment that a human's mastery of a skill becomes most complete, he or she becomes least capable of articulating it and hence of sustaining a convincing dialogue via the imitation game's remote typewriter. Consider, for example, the following.

A particularly elaborate cognitive skill is of life-and-death importance in Micronesia, whose master navigators undertake voyages of as much as 450 miles between islands in a vast expanse of which less than two tenths of one percent is land. We read in Hutchins' contribution to Gentner and Stevens' Mental Models [12]:


Inasmuch as these navigators are still practicing their art, one may well wonder why the researchers don't just ask the navigators how they do it. Researchers do ask, but it is not that simple. As is the case with any truly expert performance in any culture, the experts themselves are often unable to specify just what it is they do while they are performing. Doing the task and explaining what one is doing require quite different ways of thinking.

We thus have the paradox that a Micronesian master navigator, engaging in dialogue, let us suppose, via a typewriter link with the aid of a literate interpreter, would lose most marks precisely when examined in the domain of his special expertise, namely long-distance navigation.

Procedural infrastructure of declarative knowledge

The Turing Test seems at first sight to side-step the need to implement such expertise by only requiring the machine to display general ability rather than specialist skills, the latter being difficult for their possessors to describe explicitly. But the display of intelligence and thought is in reality profoundly dependent upon some of these. Perhaps the most conspicuous case is the ability to express oneself in one's own language. Turing seems to have assumed that by the time that a fair approach to mechanizing intelligence had been achieved, this would somehow bring with it a certain level of linguistic competence, at least in the written language. Matters have turned out differently. Partly this reflects the continuing difficulties confronting machine representation of the semantics and pragmatics (i.e., the "knowledge" components) of discourse, as opposed to the purely syntactic components. Partly it reflects the above-described inaccessibility to investigators of the laws which govern what everyone knows how to do.

In spoken discourse, we find the smooth and pervasive operation of linguistic rules by speakers who are wholly ignorant of them! Imagine, for example, that the administrator of a Turing Test were to include the following, adapted from Philip Johnson-Laird's The Computer and the Mind [14]:

Question: How do you pronounce the plurals of the imaginary English words: "platch", "snorp", and "brell"?

A human English speaker has little difficulty in framing a response over the typewriter link.

Answer: I pronounce them as "platchez", "snorpss", and "brellz".

The linguist Morris Halle [11] has pointed out that to form these plurals a person must use unconscious principles. According to Allen, Hunnicutt and Klatt [1], subliminal encoding of the following three rules would be sufficient (a code sketch follows the list):


(1) If a singular noun ends in one of the phonetic segments /s/, /z/, /sh/, /zh/, /ch/, or /j/, then add the ez sound.

(2) If a singular noun ends in one of the phonetic segments /f/, /k/, /p/, /t/, or /th/, then add the ss sound.

(3) In any other case, add the z sound.
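Taken together, the three rules just listed amount to a small decision procedure. The following is a minimal sketch of them in code, with the caveat that written word-endings serve here as a rough stand-in for the phonetic segments on which the rules actually operate (a faithful implementation would work on phonemic transcriptions):

```python
# A toy rendering of the three pluralization rules above. Spelling is
# used as a crude proxy for phonetic segments; that approximation is
# ours, not Allen, Hunnicutt and Klatt's.

SIBILANT_ENDINGS = ("s", "z", "sh", "zh", "ch", "j")   # rule (1): "ez"
VOICELESS_ENDINGS = ("f", "k", "p", "t", "th")         # rule (2): "ss"

def plural_sound(word: str) -> str:
    """Append the conjectured plural sound to an imaginary English noun."""
    w = word.lower()
    if w.endswith(SIBILANT_ENDINGS):
        return word + "ez"
    if w.endswith(VOICELESS_ENDINGS):
        return word + "ss"
    return word + "z"                                   # rule (3): "z"

for w in ("platch", "snorp", "brell"):
    print(w, "->", plural_sound(w))
# platch -> platchez, snorp -> snorpss, brell -> brellz
```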

What about a machine? In this particular case programmers acquainted with the phonetic laws of English "s" pluralizations might have forearmed its rule base. But what about the scores of other questions about pronunciation against which they could not conceivably have forearmed it, for the sufficient reason that phonetic knowledge of the corresponding unconscious rules has not yet been formulated? Yet a human, any English-speaking human capable of typewriter discourse, would by contrast shine in answering such questions.

If account is taken of similar domains of discourse ranging far from linguistics, each containing thousands of subdomains within which similar "trick questions" could be framed, the predicament of a machine facing interrogation from an adversarial insider begins to look daunting. We see in this a parable of futility for the attempt to implement intelligence solely by the declarative knowledge-based route, unsupported by skill-learning. A version of this parable, expensively enacted by the Japanese Fifth Generation, is discussed in my contribution [17] to Rolf Herken's The Universal Turing Machine.

Among the brain's various centres and subcentres only one, localised in the dominant (usually the left) cerebral hemisphere, has the ability to reformulate a person's own mental behaviour as linguistic descriptions. It follows that Turing's imitation game can directly sample from the human player only those thought processes accessible to this centre. Yet as Hutchins reminds us in the context of Micronesian navigation, and Johnson-Laird in the context of English pronunciation, most highly developed mental skills are of the verbally inaccessible kind. Their possessors cannot answer the question "What did you do in order to get that right?"

This same hard fact was recently and repeatedly driven home to technologists by the rise and fall in commercial software of the "dialogue-acquisition" school of expert systems construction. The earlier-mentioned disappointment which overtook this school, after well publicised backing from government agencies of Japan and other countries, could have been avoided by acceptance of statements to be found in any standard psychological text. In John Anderson's Cognitive Psychology and Its Implications [2] the chapter on development of expertise speaks of the autonomous stage of skill acquisition:

Because facility in the skill increases, verbal mediation in the performance of the task often disappears at this point. In fact, the ability to verbalize knowledge of the skill can be lost altogether.

Against this background, failure of attempts to build large expert systems by "dialogue acquisition" of knowledge from the experts can hardly be seen as surprising. It is sufficient here to say that many skills are learned by means that do not require prior description, and to give a concrete example from common experience. For this I have taken from M.I. Posner's Cognition: An Introduction [22] an everyday instance which can easily be verified:

If a skilled typist is asked to type the alphabet, he can do so in a few seconds and with very low probability of error. If, however, he is given a diagram of his keyboard and asked to fill in the letters in alphabet order, he finds the task difficult. It requires several minutes to perform and the likelihood of error is high. Moreover, the typist often reports that he can only obtain the visual location of some letters by trying to type the letter and then determining where his finger would be. These observations indicate that experience with typing produces a motor code which may exist in the absence of any visual code.

Imagine, then, a programming project which depends on eliciting a skilled typist's knowledge of the whereabouts on an unlabelled keyboard of the letters of the alphabet. The obvious short cut is to ask him or her to type, say, "the quick brown fox jumps over the lazy dog" and record which keys are typed in response to which symbols. But you must ask the domain experts what they know, says the programmer's tribal lore, not learn from what they actually do. Even in the typing domain this lore would not work very well. With the more complex and structured skills of diagnosis, forecasting, scheduling, and design, it has been even less effective. Happily there is another path to machine acquisition of cognitive skills from expert practitioners, namely: analyse what the expert does; then ask him what he knows. As reviewed elsewhere [18], a new craft of rule induction from recording what the expert does is now the basis of commercial operations in Britain, America, Scandinavia, continental Europe, and Japan.
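To make the "analyse what the expert does" path concrete, here is a deliberately naive sketch of rule induction from recorded decisions. The situation attributes and example traces are invented for illustration, and the commercial inducers alluded to above were decision-tree systems in the ID3 family rather than this toy:

```python
# Induce one rule per decision from traces of (situation, decision).
# All names and data here are invented; real rule-induction systems
# (e.g. the ID3 family) build decision trees from such traces instead.

traces = [
    ({"sky": "overcast", "wind": "high"}, "reef_sails"),
    ({"sky": "overcast", "wind": "low"},  "hold_course"),
    ({"sky": "clear",    "wind": "high"}, "reef_sails"),
    ({"sky": "clear",    "wind": "low"},  "hold_course"),
]

def induce_rules(traces):
    """For each decision, keep the attribute-value conditions shared by
    all of its examples and satisfied by none of the counter-examples."""
    rules = {}
    for decision in {d for _, d in traces}:
        pos = [s for s, d in traces if d == decision]
        neg = [s for s, d in traces if d != decision]
        common = dict(pos[0])
        for s in pos[1:]:
            common = {k: v for k, v in common.items() if s.get(k) == v}
        rules[decision] = {k: v for k, v in common.items()
                           if not any(s.get(k) == v for s in neg)}
    return rules

print(induce_rules(traces))
# -> {'reef_sails': {'wind': 'high'}, 'hold_course': {'wind': 'low'}}
```

The point of the exercise is that the expert never states the rule "reef sails when the wind is high"; it is recovered from what the expert was recorded doing.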

Psychologists speak of "cognitive skill" when discussing intensively learned intuitive know-how. The term is misleading on two counts. First, the word "cognitive" sometimes conveys a connotation of "conscious", inappropriate when discussing intuitive processes. Second, the term carries an implication of some "deep" model encapsulating the given task's relational and causal structure. In actuality, just as calculating prodigies are commonly ignorant of logical and number-theoretical models of arithmetic, so skilled practitioners in other domains often lack developed mental models of their task environments. I follow French [9] in using the term "subcognitive" for these procedurally oriented forms of operational knowledge.

Knowledge engineers sometimes call subcognitive know-how "compiled knowledge", employing a metaphor which misdirects attention to a conjectured top-down, rather than data-driven, route of acquisition. But whichever acquisition route carries the main traffic, collectively, as remarked in the passage from Whitehead, such skills account for the preponderating part, and a growing part as our technical culture advances, of what it is to be an intelligent human. The areas of the human brain to which this silent corpus is consigned are still largely conjectural and may in large part even be subcortical. A second contrast between articulate and inarticulate cerebral functions is that between the verbally silent (usually the right) cerebral hemisphere, and the logical and articulate areas located in the left hemisphere. Right-brain thinking notably involves spatial visualization and is only inarticulate in the strict sense of symbolic communication: sublinguistic means of conveying mood and intent are well developed, as also are important modalities of consciousness. For further discussion the reader is referred to Popper and Eccles [21]—see also later in this paper.

The superarticulacy phenomenon

Two dimensions were earlier identified along which the Turing Test can today be seen as mismatched to its task, even if we put aside forms of intelligence evoked in inter-agent cooperation which the Test does not address at all. The first of these dimensions reveals the imitation game's inability to detect, in the human, thought processes of kinds which humans cannot articulate. We now turn to the second flaw, namely that the Test can catch in its net thought processes which the machine agent can articulate, but should not if it is to simulate a human.

What is involved is the phenomenon of machine "superarticulacy". A suitably programmed computer can inductively infer the largely unconscious rules underlying an expert's skill from samples of the expert's recorded decisions. When applied to new data, the resulting knowledge-based systems are often capable of using their inductively acquired rules to justify their decisions at levels of completeness and coherence exceeding what little articulacy the largely intuitive expert can muster. In such cases a Turing Test examiner comparing responses from the human and the artificial expert can have little difficulty in identifying the machine's. For so skilled a task, no human could produce such carefully thought-out justifications! A test which can thus penalise important forms of intelligent thought must be regarded as missing its mark. In my Technology Lecture at the London Royal Society [16], I reviewed the then-available experimental results. Since that time, Ivan Bratko and colleagues [3] have announced a more far-reaching demonstration, having endowed clinical cardiology with its first certifiably complete, correct, and fully articulate, corpus of skills in electrocardiogram diagnosis, machine-derived from a logical model of the heart. In a Turing imitation game the KARDIO system would quickly give itself away by revealing explicit knowledge of matters which in the past have always been the preserve of highly trained intuition.


Of course a clever programming team could bring it about that the machine was as subarticulate about its own processes as we humans. In a cardiological version of the Turing Test, Bratko and his colleagues could enter a suitably crippled KARDIO system. They would substitute a contrived error rate in the system's clinical decisions for KARDIO's near-zero level. In place of KARDIO's impeccably knowledgeable commentaries, they might supply a generator of more patchy and incoherent, and hence more true-to-life explanations. At the trivial level of arithmetical calculation, Turing anticipated such "playing dumb" tactics.

It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine . . . would not attempt to give the right answers to the arithmetical problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.

But at levels higher than elementary arithmetic, as exemplified by KARDIO's sophisticated blend of logical and associative reasoning, surely one should judge a test as blemished if it obliges candidates to demonstrate intelligence by concealing it!

Classical AI and right-brain intelligence

Findings from "split-brain" studies have identified the co-existence in thetwo cerebral hemispheres of separate and under normal conditions mutuallycooperating centres of mental activity. The deconnected right hemisphere(surgically separated from the left) is relatively weak in the faculties of logicand language, and is usually incapable of verbal report. Description of it as acentre of "consciousness" may therefore be questioned. But Roger WalcottSperry, cited by Popper and Eccles in their book The Self and Its Brain [21],regards it as

a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing, and emoting, all at a characteristically human level . . . . Though predominantly mute and generally inferior in all performances involving language or linguistic or mathematical reasoning, the minor hemisphere is nevertheless the superior cerebral member for certain types of tasks. If we remember that in the great majority of tests it is the disconnected left hemisphere that is superior and dominant, we can review quickly now some of the kinds of exceptional activities in which it is the minor hemisphere that excels. First, of course, as one would predict, these are all nonlinguistic nonmathematical functions. Largely they involve the apprehension and processing of spatial patterns, relations and transformations. They seem to be holistic and unitary rather than analytic and fragmentary, and orientational more than focal, and to involve concrete perceptual insight rather than abstract, symbolic, sequential reasoning.

One should add in particular that the right hemisphere supports the recognition and appreciation of music and pictures—both important forms of human communication and cultural transmission. In his book with Karl Popper, John Eccles refers to a case reported by Gott of surgical excision of the right hemisphere, which "occurred in a young woman who was a music major and an accomplished pianist. After the operation there was a tragic loss of her musical ability. She could not carry a tune but could still repeat correctly the words of familiar songs."

In their survey of these questions, Popper and Eccles distinguish self-consciousness, located in the left hemisphere, from other forms of consciousness. In Dialogue V, Popper says:

I can't help feeling . . . that self-consciousness is somehow a higher development of consciousness, and possibly that the right hemisphere is conscious but not self-conscious, and that the left hemisphere is both conscious and self-conscious. It is possible that the main function of the corpus callosum is, so to speak, to transfer the conscious—but not self-conscious—interpretations of the right hemisphere to the left, and of course, to transfer something in the other direction too.

The corpus callosum is the two-way connecting bundle between the cerebral hemispheres. It consists of some hundreds of millions of nerve fibres, estimated to carry a total traffic of 4 × 10⁹ impulses per second. It is this massive high-speed highway which has been surgically interrupted in the human subjects studied under the name "split-brain".

Imagine now that Turing's Test were one day to be passed by a machine simulating the kind of intelligence which remained intact in Gott's young woman patient after her operation. It is in this kind of intelligence that AI scientists have so far demonstrated most interest and success. After seeing the imitation game mastered under the special condition that the human player (as Gott's woman patient) functions in left-brain-only mode, a critic may declare that what we now need is a machine that can be certificated for possession of right-brain mental skills. Such a requirement would surely overwhelm the combined efforts of the world's computing community into the indefinite future. The following reasons for intractability come to mind.

The isolated right brain is rated by Eccles as "having a status superior to that of the non-human primate brain". Without a linguistic link, a first step would need to be implementation of the kind of versatile trainability on which a shepherd relies in developing intelligent behaviour in his sheep-dog. "Special testing methods with non-verbal expression", to use words taken from Sperry, are used for clinical study of human subsymbolic thinking. Equivalent methods would need to be developed for work with artificial right-brain intelligences. Imagination reels at the thought of the prodigies of innovative mechanical and electronic engineering required to build a mobile artefact which could be programmed to compete in international sheep-dog trials, or, say, which could support the behaviour required of a dumb but capable robot footman! There is a persistent suggestion that Turing (to quote the words of an anonymous informant) "also considered a robotics-style test, not one involving 'deception' of a human examiner; rather, it involved seeing to what extent the robot could succeed in the completion of a range of tasks by acting appropriately in a natural range of environments." There is a point of historical interest here. Although I cannot confirm this account from personal recollection, it fits in with a tale which I have related elsewhere [15] of Turing's early post-war days at NPL: "Turing", his colleagues said, "is going to infest the countryside with a robot which will live on twigs and rust!" Such things are still in the far future of the art of rule-based programming. Even the proprioceptive control of a single robot limb at present constitutes an almost crushing challenge. H. McIlvaine Parsons' [20] proposals are in the same direction, and contain much of interest.

Perhaps the symbolic school of AI should consider a treaty with their neural net and connectionist colleagues. Each party might renounce its claim to the other's hemispheric area, and concentrate on the cognitive functions which lie within its competence. This said, the time is nevertheless ripe for a symbolically oriented foray into one particular silent area, namely the inductive extraction of articulate models from behavioural traces of inarticulate skills.

Consciousness and human-computer interaction

Only the dominant hemisphere is logically and linguistically equipped to respond to Turing's Test. But the other hemisphere's functions include complementary forms of thought and intelligence. To complicate matters, a person's stream of awareness becomes available to others only when that person's (left-brain) discourse draws upon selected traces of conscious experience in memory. Moreover such traces partly consist of after-the-event rationalisations and confabulations. Possibly one of the needs catered for by the editing process is mnemonic. Another may be the need to explain one's feelings and/or actions to self and others. Such possibilities are consistent with Trevarthen's [26] account in The Oxford Companion to the Mind:

The brain is adapted to create and maintain human society and culture by two complementary conscious systems. Specialized motives in the two hemispheres generate a dynamic partnership between the intuitive, on the one side, and the analytical or rational, on the other, in the mind of each person. This difference appears to have evolved in connection with the human skills of intermental cooperation and symbolic communication.

As with scientific publication from an experienced laboratory, what appears in conscious recollection seems to be not so much a panoptic video of everything that went on, but rather a terse documentary, edited to highlight and establish each main conclusion. This view of the matter has survived without serious challenge since its formulation by William James [13] just a century ago:

We see that the mind is at every stage a theatre of simultaneous possibilities. Consciousness consists in the comparison of these with each other, the selection of some, and the suppression of the rest by the reinforcing and inhibiting agency of attention. The highest and most celebrated mental products are filtered from the data chosen by the faculty below that, which mass was in turn sifted from a still larger amount of yet simpler material, and so on. The mind, in short, works on the data it receives much as a sculptor works on his block of stone. In a sense, the statue stood there from eternity. But there were a thousand different ones beside it. The sculptor alone is to thank for having extricated this one from the rest.

James' image of the sculptor postulates selective data destruction. Recent laboratory studies summarized by Dennett and Kinsbourne [7] indicate a slightly different metaphor. Rather than the sculptor's block of stone, Dennett proposes the writer's "multiple drafts" as a model of conscious recollection—for a comprehensive account, see his Consciousness Explained [6]. As far as can be, the mind's inbuilt editor squeezes parallel streams of perception into a single "story line", not refraining even from re-arranging temporal sequences. The following account comes from Dennett and Kinsbourne's paper.

The cutaneous "rabbit" .... The subject's arm rests cushioned on atable, and mechanical square-wave tappers are placed at two orthree locations along the arm, up to a foot apart. A series of taps inrhythm are delivered by the tappers, e.g. 5 at the wrist followed by2 near the elbow and then 3 more on the upper arm. The taps aredelivered with interstimulus intervals between 50 and 200 msec. Soa train of taps might last less than a second, or as much as two orthree seconds. The astonishing effect is that the taps seem to thesubjects to travel in regular sequences over equidistant points upthe arm—as if a little animal were hopping along the arm. Now howdid the brain know that after the 5 taps to the wrist, there were

Page 18: Turing's Test andconscious thoughtrv906kh3783/rv906...20, 1947.Thelecturerwas AlanTuring.Histopicwas thenatureofprogram mable digital computers, taking as hisexemplar the "Automatic

18 D. Michie

going to be some taps near the elbow? The experienced"departure" of the taps from the wrist begins with the second tap,yet in catch trials in which the later elbow taps are never delivered,all five wrist taps are felt at the wrist in the expected manner. Thebrain obviously cannot "know" about a tap at the elbow until afterit happens. Perhaps, one might speculate, the brain delays theconscious experience until after all the taps have been "received"and then, somewhere upstream of the seat of consciousness (what-ever that is), revises the data tofit a theory of motion, and sends theedited version on to consciousness.

We guide the compressions and filterings, and manage the unavoidable distortions of the story-building, with the aid of an intellectual tool developed expressly for the purpose, the notion of causality—absent, as Russell [23] was the first to point out, from the explicit formulations of classical physics. This notion can be seen as a cognitive device for trivializing the computations while extracting the story that we can most easily understand. It follows that cause and effect may be differently attributed by different rational observers of the same events. R.E. Ornstein [19] in his The Psychology of Consciousness quotes from an Islamic tale:

"What is Fate?" Nasrudin was asked by a scholar."An endless succession of intertwined events, each influencing theother.""That is hardly a satisfactory answer. I believe in cause and effect.""Very well," said the Mulla, "look at that." He pointed to aprocession passing in the street."That man is being taken to be hanged. Is that because someonegave him a silver piece and enabled him to buy theknife with whichhe committed the murder; or because someone saw him do it; orbecause nobody stopped him?"

Design pragmatics of intelligent awareness

To many, the incommunicable (and less testable) experiences seem even more vitally important than the communicable aspects of consciousness. But for detecting and measuring intelligence in other agents we have to substitute for the full concept a restricted notion of "operational awareness", i.e., the testable aspect. This notion is easier to integrate into scientific usage. Moreover, news from the market-place indicates an unexpected application in the technology of graphical user interfaces for personal computers and workstations. Developments are in train in the laboratories of a number of large corporations for animated figures to appear on the screen. These personal agents are programmed to simulate awareness and intelligent interest, articulately guiding the interactive user through the operating system. Nicholas Negroponte, who directs MIT's Media Lab, remarked in the Byte magazine of December 1989: "Today, the PC is driven by the desk-top metaphor, but that scenario will disappear and be replaced by a theatrical metaphor. Users literally will see on their screens little expert agents who will do things for them."

It seems possible, then, that the software industry may come to be a more exacting designer and taskmaster of imitation games than Turing's or any other academically conceived test of machine intelligence. If the graphically displayed figures fail to muster a sufficiently convincing show of conscious awareness, users will no doubt complain to the manufacturers that their agents do not understand them. To address customer dissatisfaction of this kind, such agents will need programs capable of signalling not only coherent attention, but intentions and feelings. Negroponte's "little expert agents" must convey the motives of a teacher, or they will fail.

Conversely, simulated agents may arouse complaints that their displayed awareness, although attentive, is stilted and socially obtuse. The dimension of social intelligence has been noticeably absent from most discussions. Yet present-day workstation designers envisage networking environments for multi-way cooperation among members of mixed task groups. Humans and machines are expected to pool diverse specialist skills in a common attack on each problem. Social intelligence by definition escapes through the net of one-on-one testing by dialogue. A version of the imitation game which can assess this form of intelligence needs to have the examiner communicate with teams—teams of humans, of knowledge-based robots, and (most interestingly) teams of mixed composition. Some of the design considerations go quite deep. In assessing cooperative behaviours we have to analyse, suitably translated into the realm of robot intelligence, such intimate inter-agent skills as committee work, cooperative construction of artefacts, and the collective interaction of classmates with each other and with teachers.

For forty years AI has followed the path indicated by the original form of the Test. It is no longer premature to consider extended modalities, designed to assess creative thought, subliminal and "silent" forms of expertise, social intelligence, and the ability of one intelligent agent to teach another and in turn be taught.

The imitation game in review

Let us now look back on the relation of conscious thought to Turing's imitation game.


(i) Turing left open whether consciousness is to be assumed if a machine passes the Test. Some contemporary critics only attribute "intelligence" to conscious processes. An operational interpretation of this requirement is satisfied if the examiner can elicit from the machine via its typewriter no less evidence of consciousness than he or she can elicit via the human's typewriter. We thus substitute the weaker (but practical) criterion of "operational awareness", and ask the examiner to ensure that this at least is tested.

(ii) In addition to strategies based on an intelligent agent's deep models ("understanding", "relational knowledge") we find intrinsically different strategies based on heuristic models ("skill", "know-how"). The outward and visible operations of intelligence depend critically upon integrated support from the latter, typically unconscious, processes. Hence in preparing computing systems for an adversely administered Turing Test, developers cannot dodge attempting to implement these processes. The lack of verbal report associated with them in human experts can be circumvented in some cases by computer induction from human decision examples. In others, however, the need for exotic physical supports for input-output behaviour would present serious engineering difficulties.

(iii) Conversely, where inductive inference allows knowledge engineers to reconstruct and incorporate the needed skill-bearing rules, it becomes possible to include a facility of introspective report which not uncommonly out-articulates the human possessors of these same skills. Such "superarticulacy" reveals a potential flaw in the Turing Test, which could distinguish the machine from the human player on the paradoxical ground of the machine's higher apparent level of intelligent awareness.

(iv) In humans, consciousness supports the functions of communicating with others, and of predicting their responses. As a next step in user-friendly operating systems, graphically simulated "agents" endowed with pragmatic equivalents of conscious awareness are today under development by manufacturers of personal computers and workstations. At this point AI comes under pressure to consider how emotional components may be incorporated in models of intelligent communication and thought.

(v) Extensions to the Turing Test should additionally address yet more subtle forms of intelligence, such as those involved in collective problem solving by cooperating agents, and in teacher-pupil relations.

(vi) By the turn of the century, market pressures may cause the designers of workstation systems to take over from philosophers the burden of setting such goals, and of assessing the degree to which this or that system may be said to attain them.


Acknowledgement

In preparing and revising this review I was assisted by facilities at the Turing Institute, Glasgow, UK, and from Virginia Polytechnic Institute and State University during tenure of a C.C. Garvin Endowed Visiting Professorship. I owe thanks to colleagues at both these institutions, and also to the referees for their helpful criticisms. I also had the great benefit of comments on an early draft from Professor Colwyn Trevarthen and from Dr Patricia Churchland, both of whom pointed me to relevant materials and sources.

References

[1] J. Allen, M.S. Hunnicutt and D. Klatt, From Text to Speech: The MITalk System (Cambridge University Press, Cambridge, England, 1987).

[2] J.R. Anderson, Cognitive Psychology and Its Implications (Freeman, New York, 3rd ed., 1990).

[3] I. Bratko, I. Mozetic and N. Lavrac, Kardio: A Study in Deep and Qualitative Knowledge for Expert Systems (MIT Press, Cambridge, MA, 1989).

[4] B.E. Carpenter and R.W. Doran, eds., A.M. Turing's ACE Report and Other Papers (MIT Press, Cambridge, MA, 1986).

[5] D.C. Dennett, Consciousness, in: R.L. Gregory, ed., The Oxford Companion to the Mind (Oxford University Press, Oxford, 1987).

[6] D.C. Dennett, Consciousness Explained (Little, Brown & Co., Boston, MA, 1992).

[7] D.C. Dennett and M. Kinsbourne, Time and the observer: the where and when of consciousness in the brain, Behav. Brain Sci. (to appear).

[8] J.C. Eccles (1964), cited in: R.W. Sperry, Consciousness and causality, in: R.L. Gregory, ed., The Oxford Companion to the Mind (Oxford University Press, Oxford, 1987).

[9] R.M. French, Subcognition and the limits of the Turing Test, Mind 99 (1990) 53-65.

[10] R.L. Gregory, ed., The Oxford Companion to the Mind (Oxford University Press, Oxford, 1987).

[11] M. Halle, Knowledge unlearned and untaught: what speakers know about the sounds of their language, in: M. Halle, J. Bresnan and G.A. Miller, eds., Linguistic Theory and Psychological Reality (MIT Press, Cambridge, MA, 1978).

[12] E. Hutchins, Understanding Micronesian navigation, in: D. Gentner and A. Stevens, eds., Mental Models (Erlbaum, Hillsdale, NJ, 1983) 191-225.

[13] W. James, The Principles of Psychology (Dover, New York, 1950; first published 1890).

[14] P.N. Johnson-Laird, The Computer and the Mind (Harvard University Press, Cambridge, MA, 1988).

[15] D. Michie, Editorial introduction to A.M. Turing's chapter "Intelligent machinery", in: B. Meltzer and D. Michie, eds., Machine Intelligence 5 (Edinburgh University Press, Edinburgh, Scotland, 1969).

[16] D. Michie, The superarticulacy phenomenon in the context of software manufacture, Proc. Roy. Soc. Lond. A 405 (1986) 185-212; also in: D. Partridge and Y. Wilks, eds., The Foundations of Artificial Intelligence (Cambridge University Press, Cambridge, England, 1990) 411-439.

[17] D. Michie, The Fifth Generation's unbridged gap, in: R. Herken, ed., The Universal Turing Machine (Oxford University Press, Oxford, 1988).

[18] D. Michie, Methodologies from machine learning in data analysis and software, Computer J. 34 (6) (1991) 559-565.

[19] R.E. Ornstein, The Psychology of Consciousness (Harcourt, Brace and Jovanovich, 1977) 96; (Freeman, New York, 1st ed., 1972).


[20] H.M. Parsons, Turing on the Turing Test, in: W. Karwowski and M. Rahimi, eds., Ergonomics of Hybrid Automated Systems II (Elsevier Science Publishers, Amsterdam, 1990).

[21] K.R. Popper and J.C. Eccles, The Self and Its Brain (Routledge and Kegan Paul, London, 1977).

[22] M.I. Posner, Cognition: An Introduction (Scott, Foresman, Glenview, IL, 1973).

[23] B. Russell, On the notion of cause, Proc. Aristotelian Soc. 13 (1913) 1-26.

[24] J.R. Searle, Is the brain's mind a computer program? Sci. Am. 262 (1990) 20-25.

[25] C. Trevarthen, The tasks of consciousness: how could the brain do them? in: Brain and Mind, Ciba Foundation Series 69 (New Series) (Excerpta Medica/Elsevier North-Holland, Amsterdam, 1979).

[26] C. Trevarthen, Split-brain and the mind, in: R.L. Gregory, ed., The Oxford Companion to the Mind (Oxford University Press, Oxford, 1987).

[27] A.M. Turing, Computing machinery and intelligence, Mind 59 (1950) 433-460.

[28] A.N. Whitehead, An Introduction to Mathematics (1911).